
First steps into Scala

For over a full year now, I've been looking into Scala. I have heard many people talk about it passionately, and it got me interested. A lot of big companies are also investing in this new language, so I figured I had to check it out. In my 4 years of Java programming I've learned the Java EE stack (Spring and Java EE) and I must say, not much has changed since then. We got the long-awaited release of Java 7 (with lots of features missing) and in 2009 Sun released Java EE 6. Java EE 6 was a cool release and I have blogged about it a few times, but it is little more than further abstraction of the same concepts. Spring hasn't really been moving forward at all. The release that was used when I started programming professionally and the one we use in production are largely the same version. There is a 3.0 release but, to be honest, it is not very compelling.

So after hearing people like Dick Wall talk about Scala for a long time, I decided to pick it up. The first thing I tried was the Scala Koans, a project on GitHub that helps you learn the language by correcting failing tests. At the time the Scala Koans project was still at a very early stage; I did not get the koans to work and gave up rather quickly. My second attempt at Scala came when I had some spare time and wrote a simple application which parsed some XML. This worked, but in the end it only took me 5 minutes and, looking back at it now, I did not use any language feature that added value. The only thing I did was write Java in Scala. In the meantime I joined the Belgian Scala Enthusiasts and I'm following the mailing list, but still, I couldn't really write a true Scala application. I didn't even get the feeling I was hitting it off.

At Devoxx 2011 I was determined to go and see the Scala talks and everything related to Scala. I ended up seeing 2 talks about the Play! framework, 3 talks about Scala and a talk about Akka. I also talked to the guys at MongoDB and Typesafe. Devoxx 2011 was a real eye opener for me to get started with Scala, for several reasons. First of all there were the talks. The talk about Play! 2.0 showed me how to build a web application with Scala. It also demonstrated what a cool framework Play! is. The Akka talk showed me how to create super scalable and decoupled applications. Akka is written in Scala and integrated with the Play! framework (version 2.0), which was a plus for me. The talk from Matt Raible didn't really show me anything technical. His talk was about some technologies he wanted to learn (Scala, Play, CoffeeScript, {Less}, Scalate and Jade). He wanted to talk at Devoxx so badly that he just submitted a talk. At the time he submitted it, he didn't know any of the technologies he was going to talk about. He even waited to start learning them until his talk got accepted. Only after the approval did he start learning and blogging about these technologies, and he then showed at Devoxx what he had built. His talk was about the same thing I had been trying to do for over a year but never pushed through. It might sound corny, but this talk was the real boost for what I'm doing now.

On the other hand, I had 2 interesting chats in the downstairs hall. One was with a MongoDB guy who showed me the API he had built for connecting to MongoDB from Scala. Unfortunately, I did not get his name. The other was with Henrik Engström, a developer who works at Typesafe (a company co-founded by Martin Odersky, the creator of the Scala language; next to Scala they also house the Play! framework and Akka). We just talked about how you can use Scala in web applications.

When I got back from Devoxx, I literally got home and downloaded the Scala runtime, the Typesafe Stack (at the time this was only Scala, Akka and the Scala IDE) and the beta of the Play! framework. I have a small project in my head that I've been thinking about for some time now, and I started implementing it. Based upon the 3 Scala examples that ship with the Play! 2.0 beta, I'm learning the language bit by bit. But there were a lot of language features I didn't really grasp. I tried to look into the Scaladoc but got even more confused. I knew that Typesafe offers a free book, 'Scala for the Impatient', so I decided to check it out. I downloaded the book and started reading it, and things cleared up immensely. I now understand why people say Scala isn't complex, it just looks that way. It also explains the weird things you see in the Scala API documentation. Now I'm working with Play! 2.0 and Scala. Once I get the hang of it, I'll also try integrating Akka and will probably deploy the result on Heroku to see what it can do. I'm going to try to keep documenting my steps into Scala, Play! and Akka. I'll see where it takes me.

Reference: First steps into Scala from our JCG partner Jelle Victoor at the Styled Ideas blog

Technology Related Classic Mistakes

In my last blog I looked at Product Related Classic Mistakes from Rapid Development: Taming Wild Software Schedules by Steve McConnell, which, although it has now been around for at least 10 years and times have changed, is still as relevant today as when it was written. As Steve's book states, classic mistakes are classic because they're made so often and by so many people. They have predictably bad results and, once you know them, they stick out like a sore thumb. The idea behind listing them here is that, once you know them, you can spot them and hopefully do something to remedy their effect. Classic mistakes can be divided into four types:

- People Related Mistakes
- Process Related Mistakes
- Product Related Mistakes
- Technology Related Mistakes

Today's blog takes a quick look at the fourth of Steve's categories: Technology Related Mistakes, which include:

- Silver Bullet Syndrome
- Overestimated Savings From New Tools or Methods
- Switching Tools in the Middle of a Project
- Lack of Automated Source Control

Silver Bullet Syndrome
Don't expect the use of a new technology or development tool to solve all your scheduling problems.

Overestimated Savings From New Tools or Methods
Use of new technologies and practices can increase development time while their learning curves are climbed, and new practices carry new risks that you only discover by using them. Teams (or organisations) seldom improve their productivity in leaps and bounds; aim for steady, slow progress. Also overestimated are the savings that arise from code re-use. Code re-use is a very effective approach, but wishful thinking comes into play and the savings are usually not as high as expected.

Switching Tools in the Middle of a Project
Try not to upgrade your compiler, operating system etc. in the middle of a project. Apart from the fact that the installation takes time, the associated learning curve, rework and inevitable mistakes often cancel out the benefits of the new tool.

Lack of Automated Source Control
This was written 10 years ago, but it still, amazingly, happens, even though there are companies who will provide this service for you via the Internet. Failure to use automated source control exposes your project to needless risk. Use source control to the full.

Reference: Technology Related Classic Mistakes from our JCG partner Roger Hughes at the Captain Debug's Blog.

Personal gains from contributing to Open Source

Many may find it difficult to understand why certain people spend a lot of their spare time producing stuff without being paid and then give it away for free. Is this altruism on the edge of stupidity, or are there personal benefits to be gained from participating in such activities? The act of charity and the joy of programming play a part but may not be the ultimate goal. The motives for participation are subjective, but it seems that many do it to boost their professional work in one way or another.

Contribution
Schools benefit greatly from reduced costs, and many students would not have had the opportunity to get a computer science degree without the wealth of information and experience found in open source. Many corporations certainly also benefit from open source. Yes, some people actually develop feelings of wanting to give something back. Maybe not trying to make a difference, but simply showing a token of gratitude to a community providing such a strong foundation for learning and education to anyone in society.

Appreciation
Programmers want others to use their stuff. We are social beings and it feels good to hear someone express their appreciation for your work. Appreciation motivates the will to understand different points of view, reduces insecurity and allows you to put others before yourself. Collaboration and social interaction create a feeling of belonging, and coding for a community can make this activity even more energizing and enjoyable. Corporate companies sometimes have a tendency to give managers most of the credit, which can be disappointing and demoralizing indeed. Reading emails of gratitude and receiving help from others can feel refreshing, especially for those who have been working under less gratifying conditions.

Self-education
This is your chance to work on the projects and problems that excite and inspire you the most, a strong motivator for doing your best and reaching creative heights. It may seem scary to know that your work will be reviewed and criticized publicly, but this is a tool for improving your skills and strengthening your attitude and habits towards quality. You will not code sloppily knowing that your work will be accessible to anyone. The larger projects that have survived for years and continue to evolve often have great leadership, organization and development guidelines. Technical skill is just one of the many things to observe and absorb. There is also a chance that you will join a team and learn from people who are many levels better than yourself.

Reputation
Open sourcing will build a public resume that is accessible to anyone. It looks good to have worked on open source projects, especially famous ones. Meritocracy has a tendency to arise, so offering bug corrections, improvements and ideas will earn you recognition among your peers and users and enhance your reputation. But keep in mind that quality is key: people do not want to spend time on contributions that ignore the guiding principles just because the contributor was too lazy to read them. Such a relationship can be quite stimulating compared with the typical interaction of trying to impress your manager, whose interest usually lies in delivering on time. Transparency also feeds honest and humble communication, since nobody can hide bad or selfish decisions. Strong disagreements that might otherwise end in rudeness and cruelty behind closed doors are likely to be discussed more calmly when people know that others are observing.

Control
Most people wish for the freedom to control their lives. It can be incredibly frustrating to work on a project with budget constraints where software is rushed into an unmanageable mess. Reorganization and outsourcing can also seed feelings of disappointment and helplessness. With open source you are no longer a victim of such circumstances. You are free to implement and improve the features you think matter, while users help with finding relevance and setting priorities.

Reuse
Most programmers develop an urge not to repeat themselves throughout their careers. Producing open source software gives you the freedom to truly reuse your efforts when changing jobs (or starting your own company) and to share them with anyone. These intentions stimulate thinking in broader perspectives and designs that are cooperative, flexible and adaptable to different environments, in order to maximize opportunities for reuse. Keeping users loyal often means maintaining version compatibility and upgradability, and having to deal with all this complexity will make you a better programmer. And this is the right thing to do. Newton would have been proud to see this tradition of code-sharing and reuse. Reinventing wheels is a terrible waste of time and human skill. Many view patents as the direct opposite: a threat that prevents reuse and slows programmers down. Patents also encourage a culture where people build barriers instead of helping each other. It is understandable that patents make the open source community frown.

Conclusion
Open source is a lot about a community of freedom and sharing, and it is not hard to see why open source developers are often highly respected. Participation will introduce you to a community of incredibly talented, like-minded and caring people who may help improve your skills beyond imagination. Unexpected and exciting job opportunities may indeed arise, maybe at a company that will give you the fortune of producing open source software and getting paid at the same time.

Reference: Personal gains from contributing to Open Source from our JCG partner Kristoffer Sjögren at the deephacks blog.

Scala for 2012? Deciding Whether to Invest In a Programming Language

I have found it both interesting and rewarding to learn a new programming language or major framework on a roughly yearly basis. If forced to self-identify with any single programming language, it would be Java. However, over the years I've used C and C++ fairly extensively, and have used and learned enough to be dangerous about several other languages, including shell scripting languages, Perl, JavaScript, Pascal, C#, Ruby, JRuby, Groovy, PHP and Python. Of the latter group (Ruby, JRuby, Groovy, PHP and Python), Groovy has had the most practical benefit for me, but I have learned valuable idioms, best practices and different ways of thinking from using the other languages.

In some years I've not been as quick to learn a new language, because I was learning a major framework for a language I knew, or because a language I am familiar with underwent significant changes. For example, Struts and the Spring Framework each dominated my time as I learned them. JavaFX has similarly dominated my interest in recent weeks. When not working with a new language, I tend to focus on libraries and frameworks of the languages I am comfortable with. I have spent more time on Guava this year, for example.

Learning a new language does provide many benefits. However, these benefits don't come for free; there is always an opportunity cost associated with learning anything new. If a programming language is particularly different from what one is used to, this opportunity cost can be great, and it can manifest as many different things. It might be lower productivity than could be had using a known language. It might be missing out on learning a new framework, library or approach in the more familiar language. It might be having to settle for fewer features, or for different features that better fit the new language. It may be as simple as not being able to do other things one would want to do, and might have time to do, if using a familiar language.

Because there are so many potential opportunity costs associated with learning a new programming language, I try to be careful about which ones I invest my time in. I typically have a compelling reason for learning a new language. Compelling reasons might include specific advantages of a language (such as PHP for many Web 2.0-centric projects) or widespread use and "employability" of that language. Other reasons might be to learn new techniques that can be adapted to more familiar languages. Perhaps the most compelling reason I've had to learn a new language has been to read and maintain code or scripts that are handed to me and that I am assigned responsibility for.

For several years now I've been somewhat curious about Scala, but have not yet committed myself to using and learning it, because other languages, frameworks and tools have grabbed my attention. Typically, I've had some motivation that has made these tools, languages or frameworks seem most worth my investment of time and energy. For example, the need for a nice scripting language that meshes well with my Java development environment led me to Groovy. My attendance at JavaOne 2010 and JavaOne 2011, coupled with my interest in a modern GUI technology, has led to my interest in JavaFX. I spent time with Python after being in a position where I needed to read and modify Python scripts.
I recently welcomed the opportunity to pose some questions to Scala creator Martin Odersky about what might motivate me (and others) to invest time and energy in learning Scala. As I articulated my questions for Martin, I realized that these are really the things I informally look at when investigating a new language. I typically spend an hour or two first finding a language's highest-level motivations, and only invest more time in the language if it seems to be a good potential fit for me. Martin has agreed to my posting my questions and his answers, and they are shown next (I have added hyperlinks).

Question: What is the most compelling/motivating reason or reasons that one might want to invest time in learning Scala, as opposed to continuing use of Java (mostly for applications in my case) + Groovy (mostly for development environment scripting in my case)? For example, Groovy appealed to me at a high level as a way to script with libraries, idioms and syntax that I was comfortable with from Java application development experience.

There are actually quite a few different facets of Scala that individuals attach to for different reasons. Some are attracted by the succinct syntax and resulting productivity. Others gravitate to the stability of the JVM and the runtime performance of a compiled language (versus an interpreted language, like Groovy), or the ability of a sophisticated type system to help programmers avoid errors that would otherwise crop up at runtime. Others find the functional style of programming to be a more natural way to reason about their application logic. One of the strongest attractors, from a practical point of view, is that Scala (and the rest of the Typesafe Stack, including Akka and Play) is designed to provide better tools to address the dual challenges of parallel and concurrent programming. With the advent of mainstream multicore/manycore hardware, and the increasing scale of applications that developers are charged to build, many industry developers are looking for higher-level abstractions than threads and locks for building at this next scale. Many find that the functional style, immutable state, the actor concurrency model, and other concepts at the heart of Scala make it simpler to build parallel and concurrent applications.

Question: What are Scala's biggest strengths, advantages, and innovative features?

At a high level, Scala seeks to be a pragmatic language that scales from the smallest scripts to the largest distributed systems. One major thread of innovation in Scala is its unique blend of object-oriented Java with functional programming concepts. Scala's libraries build on this foundation to provide outstanding support for concurrency and parallelism, for example through the actor programming model and the built-in parallel collections introduced in Scala 2.9. Scala's expressive type system and syntax help developers build more reliable code and greatly increase extensibility, especially for library developers and those building domain-specific languages (DSLs). Finally, it's important not to overlook the fact that Scala is deeply integrated with Java, supporting blended Scala/Java projects and allowing developers to apply their skills and investments in Java immediately when they start working with Scala.

Question: What are Scala's biggest weaknesses, disadvantages, and plans for improvement?

One of the challenges for a relatively young language like Scala is the maturity of tools.
In particular, the Scala IDE for Eclipse has had its rough edges in the past, which is one of the reasons that Typesafe, as the leading commercial contributor to Scala, has invested substantial resources in overhauling the IDE with version 2.0 (just released in December 2011). Another challenge for adoption is that Scala, with functional programming, does introduce a new mode of thinking about programs, which takes some time to learn. Scala makes the transition gentle, because one can start by writing Scala code like more concise Java code. But as Scala's native library ecosystem grows, chances are that newcomers to the language will come across some of its more foreign features before they have developed a good understanding. To avoid culture shock, we need to develop a set of best practices and good tutorials that help with the transition. "Programming in Scala", which I have co-authored, is a comprehensive tutorial on the object/functional style. Cay Horstmann's "Scala for the Impatient", available as a free preview on the Typesafe site, is a pragmatic, fast-paced introduction.

Question: What situations/scenarios/use cases is Scala best and worst suited for?

As described above, Scala and its frameworks like Akka and Play really shine for building systems that need to scale on multiple fronts: across cores, across machines in a cloud environment, and across large software teams. Traditionally, one area where Scala or other JVM-hosted languages would not be considered well suited is lower-level systems programming. But interestingly, we see evidence of forward-looking systems developers increasingly embracing managed runtime languages like Scala, because they face fundamental challenges in building reliable systems for the era of multicore hardware and distributed deployments.

Martin's responses validate some of my own conclusions about Scala, drawn from reading posts by Scala enthusiasts and even some of the detractors. In terms of motivation, I have a difficult time believing that learning Scala primarily as a scripting language will be very motivating, because I'm pretty happy with Groovy for scripting. However, for development of applications I tend to use Java rather than Groovy, and I wonder if it's in that area where I'd be most likely to benefit from learning and using Scala.

Once I determine that a language is worth investing in, the next step is deciding how best to learn it. Reading about it is a necessity, but using it is what really helps me learn it, and also what helps me identify the things I don't like about it. The trick is to come up with a somewhat realistic example that is easy enough to implement, but interesting enough to prove out some concepts. A "Hello World" is okay for getting one's feet wet, but doesn't really test how the language fares for a developer's specific needs. My favorite initial examples are ones that provide real benefit in addition to being a mechanism for learning. For example, when I was learning Groovy, I developed several scripts early on that were helpful to me as scripts in their own right, regardless of the language they were written in. In those cases, I gained familiarity with Groovy while also receiving other utilitarian benefits.
The JavaWorld article Learn Scala with Specs2 Spring describes how a Java developer who uses the Spring Framework can use the author's company's Specs2 Spring for integration testing and also benefit from "an efficient and safe way to learn the patterns of object-functional programming with Scala." The entire premise of this article is exactly the kind of thing I like to do when learning a new language: combine legitimate benefit with learning the new language.

One other thing to think about when trying out a new language is to ensure that one is trying it out in the correct situations. This was easy for me with Groovy: I tried Groovy first in situations in which I wanted the power of the JVM or the scope of the JDK, but wanted a scripting-friendly language. A developer can quickly decide a language is "not good" simply because the situation in which the language is used is not a great fit for it. Related to this, another issue I try to keep in mind when learning a new language is that it's not fair to compare a language I know well and have spent thousands of hours with to a language I've spent a few hours with. Unless I encounter some real deal-breakers early in the process, I try not to let "little problems" or things I don't like about the new language prevent me from giving it a real chance. An excellent recent post on this is Rob Pike's Esmerelda's Imagination. All that being stated, there are times I run into a true deal-breaker that makes me realize I should not invest any more time in a particular language because it doesn't fit my needs. That doesn't necessarily mean there's anything wrong with the language, but simply that it doesn't fit my needs well. An example of this would be using Java for a real-time system in Java's early days.

I think I'm almost ready to commit to spending more time with Scala. I'm not the type to make new-year resolutions, but it just so happens that it seems like the right time to give Scala a closer look. If I really do start to invest more time in Scala, my plan is to first re-read Bruce Eckel's Scala: The Static Language that Feels Dynamic and then read the A1 chapters of Scala for the Impatient, trying out and adapting examples. If I'm still interested in Scala after that, I can invest more at that time.

Do I still have reservations about spending time on Scala? Of course. One of my biggest concerns is best articulated by someone who has actually tried out Scala. Cédric Beust states, "In my experience with Scala, it's hard not to like the language in the first week and it's hard to still be in love with it after reading the 700+ pages of a book about it." On the other hand, Casper Bang articulates well why I think maybe I should spend time with Scala even without any other obvious motivation: "So I guess my point is, even if I do find Scala hyperbolish and biting over a bit too much; the majority of identifiable alpha-geeks that I track, are moving this way and as a practicing professional, I can not afford to ignore this." The post Offbeat: Scala by the end of 2011 – No Drama but Frustration is Growing and its feedback comments are insightful and seem to reiterate some of the issues that Martin pointed out Scala must deal with. In particular, the issue most likely to deter me from spending time on Scala is the risk that Scala may never take hold in mainstream development.
If that turns out to be the case, then the primary advantage of learning Scala would be to change my way of thinking about things, and that's not always worth the opportunity cost and other costs. That post and its feedback comments contain multiple sides of the same issue and are another reminder that I probably need to do more with Scala to decide for myself how I feel about it. My plan as of right now is to invest significant time and effort into learning the basics of Scala and applying it to some "realistic" examples. I even have plans to blog about what I learn. But I have had these types of plans before and been distracted by some other shiny thing that came my way. I think this time will be different, but I should know for certain by the end of 2012.

Reference: Scala for 2012? Deciding Whether to Invest In a Programming Language from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Git DVCS – Getting started

Git is a distributed revision control system, where every working directory is a full-fledged repository with complete history and full revision tracking capabilities. Git is categorized as a DVCS (Distributed Version Control System) because it does not depend on a central server. So the academic way of working with Git is pushing/pulling data from/to each developer's repository. This works in small teams or in highly distributed development (open source projects with people working around the world), but mid-size teams and companies often require a central repository because of infrastructure/workflow processes like a Continuous Integration system, QA checks before delivering, environment backups or external manual audits, so it may seem that a traditional SCM is desired. But this claim is far from reality: Git is still your VCS, so how about creating a theoretical central repository? I say theoretical because in Git there is no central repository at a technical level; this repository acts as central only by convention. I, like many other posts, call this repository origin.

A Git remote repository is a repository without a working directory, composed only of the .git project directory and nothing else. Nvie has created a nice schema of this topology. Note that each developer pulls from and pushes to origin, but may also exchange data with other peers. For example, if two or more developers are working on a new feature, they can push changes between themselves before pushing a stable version to the origin repository. Git is not tied to any particular transmission protocol: it supports transmitting changes via USB stick, email, or traditional ways like HTTP, FTP and SSH. So although Git has broken the typical SCM hub architecture into a peer-to-peer structure, we can still create (by convention) a central repository for uploading stable code. And let me write it again: this central repo is just another node among the peers, not THE REPOSITORY.

What I am going to explain is how to install and configure this "central repo" on an Ubuntu server. Git only takes care of repository management and leaves transport operations to lower layers, and a typical transport configuration for these central repos is the SSH protocol. So let's install and configure an SSH server (if you have already installed one, skip to the next step).

Install SSH Server:

$ sudo apt-get install openssh-server

After it is installed, try:

$ ssh <username>@<servername>

Configure SSH Server:

In /etc/ssh/sshd_config, configure the server to only use SSH Protocol 2:

Protocol 2

The next step is to install Git (you can skip this step if you have already installed it).

Install Git (not the git-core package):

$ sudo apt-get install git

Then run the git command to check that it has been installed correctly.

The next step is creating a bare repository for the project. By convention, bare repository directories end with .git.

Creating a bare repository from an existing repository:

$ git clone --bare my_project my_project.git

This command transforms /my_project/.git into my_project.git.

Creating a new bare repository:

If you are starting a new project, you can initialize it directly as a bare repository using:

$ mkdir my_project.git
$ cd my_project.git
$ git init --bare

Now the whole structure is created and ready to be transferred.
In case the initial project was started on a developer's computer, you should copy this directory (using scp, for example) to origin. Then execute the next command:

$ git init --bare --shared

This command will properly set group read/write permissions.

Now it is time to clone the created repository to the developer's computer. I assume that the developer already has an account on the server (for connecting using ssh). So go to the developer's computer (or open another terminal) and type the next command:

$ git clone <username>@<servername>:/<directories>/my_project.git

If the user has read permission on the my_project.git directory, the repository will be downloaded to the local computer. Write permission is required for pushing changes.

Now I suppose you are thinking that creating a remote repository was easy, but another problem arises. If your company is small you can manually create a new user on your server for each developer, which is easy to manage; but if your company is bigger, managing all those users is hard. You must create an account for each one and, more importantly, they will have access to the server shell using ssh (not only for uploading code), and this implies a security problem: you have to take care of what a user can and cannot do in his shell. So at this point, one can set up accounts for everyone, which is straightforward but can be cumbersome. Another way is using LDAP or some other centralized system, but that is beyond the scope of this post. A second method is to create an account called "git" on the server, ask every user who will have access to send his SSH public key, and add that key to the .ssh/authorized_keys file of the "git" user. I am sure this approach sounds familiar (the GitHub way?). So let's explain it.

First of all, each user should send you his public key (found in the .ssh directory as a *.pub file), or simply create a new one using the ssh-keygen command. See this tutorial for learning how to generate the key pair: http://github.com/guides/providing-your-ssh-key.

Setting up the Git server with user public keys:

The first step is to create a git user with an .ssh directory.

# from server
$ sudo adduser git
$ su git
$ cd
$ mkdir .ssh

The next step is to create the authorized_keys file where all public keys will be stored. For example:

# from server
$ cat id_dsa.user1.pub >> ~/.ssh/authorized_keys
$ cat id_dsa.user2.pub >> ~/.ssh/authorized_keys

Now each developer with a public key published in authorized_keys, and the corresponding private key in his own .ssh directory, has access to the repository. Let's try it. Open another terminal (this would be the developer machine in a real scenario) and try to clone the existing repo from the server:

# from developer computer
$ git clone git@<servername>:<directories>/my_project.git

After the repository is cloned to the developer's computer, modifications can be made and pushed.

Now you can say: "OK, I don't have to create one account for each developer, but I still have a security problem": each developer still has access to the shell. Yes, that is true, but you can easily restrict the "git" user to only doing Git activities with a limited shell called git-shell. The next step is specifying git-shell instead of bash for the git user in /etc/passwd:

$ sudo vim /etc/passwd

and change

git:x:1000:1000::/home/git:/bin/sh

to

git:x:1000:1000::/home/git:/usr/bin/git-shell

Now your server is secured: only Git operations are allowed using the "git" account, for users that have sent their SSH public key.
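With the server in place, a developer's day-to-day interaction with origin is plain Git. As a quick sanity check of the whole setup, the following sequence makes a commit and pushes it to the central repo (the repository and server names are just the placeholders used above):

# from developer computer
$ cd my_project
$ echo "hello" > README
$ git add README
$ git commit -m "Add README"
$ git push origin master

If the push is rejected for permission reasons, double-check that the bare repository was created with --shared and that the developer's public key is present in the git user's authorized_keys file.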
You have your central remote repository configured and ready to be used. At this point you may consider installing Git tools like gitweb, gitosis or gitolite, but they are off topic for this post. I hope you have found this post useful.

Reference: Git DVCS – Getting started from our JCG partner Alex Soto at the One Jar To Rule Them All blog

Best Of The Week – 2011 – W53

Hello guys,

Time for the "Best Of The Week" links for the week that just passed. Here are some links that drew Java Code Geeks attention:

* 11 Things every Software Developer should be doing in 2012: A great list of things that developers should be doing in the year to come. Perfect for new year's resolutions :-). Also check out Things Every Programmer Should Know.

* Practical Garbage Collection, part 1 – Introduction: A must-read article for an introduction to garbage collection. Enough said.

* Android SDK: Build a Mall Finder App – Mapview & Location: A very helpful tutorial that shows how to use both the integrated Google Maps functionality and the location-based capabilities of Android. Also check out Android Google Maps Tutorial and Android Location Based Services Application.

* Smelly Communication: How the Suits should assign tasks to Geeks: This article discusses the communication problems between the "Suits" (Marketing, Sales, Creative folks) and the "Geeks" (Devs, Ops and Infra folks), which stem from a fundamental gap in understanding when translating the language of business needs into the language of technical requirements.

* Provisioning of Java web applications using Chef, VirtualBox and Vagrant: This DevOps tutorial describes how to set up Chef (configuration management), VirtualBox (virtualization) and Vagrant in order to deploy a very simple web application to a Tomcat instance in a dynamic topology of virtualized servers.

* Java Servlet 3.0 Tutorial: WebServlet Annotations with NetBeans 7, Jetty and Maven: A quick tutorial on how to implement a 3.0 Servlet using WebServlet annotations and how to deploy it on Jetty using Maven and NetBeans. Also check out Servlet 3.0 Async Processing for Tenfold Increase in Server Throughput and JAX-WS with Spring and Maven Tutorial.

* 10 Indispensable NOC Tools for Linux and BSD: As the title suggests, some Linux/BSD-based tools that will help you with NOC administration. Tools like Nagios, Zenoss, CloudPassage, Htop and Xen are suggested.

* OSGi: An Introduction: A nice introductory article to OSGi; it demonstrates how to use the framework in order to build modular systems. Eclipse and the Equinox platform are used for the example. Also check out OSGi – Simple Hello World with services and OSGi Using Maven with Equinox.

* Integrating Lucene with HBase: A very interesting article that describes how to integrate Lucene (search library) with HBase (NoSQL data storage) in order to build a highly scalable search implementation. For this, a memory-based backend is used as an in-memory cache, and a mechanism for synchronizing this cache with the HBase backend is implemented.

* The Rise of Application Analytics: A New Game Demands New Rules: This article discusses the emerging application analytics market, which offers unprecedented insight into how software is being used in the real world and gives multiple stakeholders a reliable way to measure and manage development investments. Application analytics correlates behavior with business results and extends beyond browser clicks on a web page.

That's all for this week. Stay tuned for more, here at Java Code Geeks.

Cheers,
Ilias

What I Learnt about JavaFX Today

In case you haven't heard, JavaFX 2 is the new desktop / web / client framework for Java. It's had a considerable overhaul since JavaFX 1 (which was frankly not that impressive). Out has gone the custom scripting language; instead you write it using standard Java plus an XML-based language for the actual UI presentation. So today, a friend and I got together at one of our places to teach ourselves a bit of JavaFX. Here's what we learned, starting with some of the yak-shaving we had to do:

- First of all, install the JavaFX developer preview (get it here). You have to unzip it and place the resulting directory somewhere sensible, chown'd to root. I put it in /usr/local/javafx-sdk2.1.0-beta/
- Next, you'll want an IDE to go with that. NetBeans is the IDE which is the most advanced and usable with JavaFX 2; you want NetBeans 7.1 RC2.
- To get this to install on a Mac, you need JavaForMacOSX10.7.dmg. No lower version of official Apple Java will do, and an OpenJDK build won't work either (even if it's the correct version or higher). Once it's installed, NetBeans will work fine with other JREs (I was mostly running it against the Java 7 Developer Preview).
- To start new JavaFX projects, you need to tell NetBeans where to find JavaFX. For this, you need to create a new JavaSE platform profile and add the JavaFX dependencies in manually.

Once it was installed, we started working with JavaFX properly. Our project for the day was to try to replicate some of Victor Grazi's concurrency animations in JavaFX, both to teach ourselves the JavaFX technology and to create some teaching tools as outputs.

- JavaFX uses Application as the main class to subclass. The API docs are here.
- If you've done any Flex development, JavaFX will seem very natural:
- The FXML file provides the UI and layout.
- The top-level FXML element has an fx:controller attribute, which defines the Control for this View.
- FXML elements are bound to members of the controller class which have been annotated with the @FXML annotation.
- The fx:id property is used to define the name of the member that is being bound to the FXML element.
- Binding also occurs to methods; e.g. buttons use an onAction handler, like this: onAction="#isFutureDone". The #methodName syntax is used to say which method should be called when the button is pressed.

From this it's very easy to get started with building up a basic application. Some things that we found:

- The UI thread can be quite easy to tie up. Don't ever call a blocking method directly from the Control object, as triggering this code path on the UI thread will cause the display to hang.
- Be careful of exception swallowing.
- If you have a method in an object which is updating a UI element, but which is not annotated with @FXML, then you seem to need to call requestLayout() on the UI element after updating it. We're not sure we got to the bottom of why; please enlighten us if you know.
- The framework seems to use custom classloading to transform the FXML file into a "scene graph" of objects, seemingly a bit like how Spring does it.

On the whole, we were quite impressed with our short hack session. The APIs seem clean, and the overall design of the framework seems sound. There were a few stability issues, but this is bleeding-edge tech on Mac; both the JDK and the JavaFX runtime are Developer Previews. We'll definitely be back to do some more with JavaFX, and look forward to seeing it mature and become a fully-supported OSS framework for client development in Java.
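To make the FXML binding notes above concrete, here is a minimal controller sketch. Everything in it (the class name, the statusLabel field, the isFutureDone handler) is a hypothetical example rather than code from our session; it assumes an FXML file whose root element declares fx:controller="DemoController" and which contains a Label with fx:id="statusLabel" and a Button with onAction="#isFutureDone":

import javafx.event.ActionEvent;
import javafx.fxml.FXML;
import javafx.scene.control.Label;

public class DemoController {

    // Bound to <Label fx:id="statusLabel" .../> in the FXML file
    @FXML
    private Label statusLabel;

    // Bound to <Button onAction="#isFutureDone" .../> in the FXML file;
    // this runs on the UI thread, so it must not block
    @FXML
    private void isFutureDone(ActionEvent event) {
        statusLabel.setText("Button pressed");
    }
}

Note the rule we learned the hard way applies here too: the handler runs on the UI thread, so any long-running check should be handed off to a background thread rather than performed inline.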
Reference: What I Learnt about JavaFX Today from our JCG partner Martijn Verburg at the Java 7 Developer Blog.

Simplifying RESTful Search

Overview

The REST architectural pattern is based around two basic principles:

- Resources as URLs: A resource is something like an entity or a noun in modelling lingo. Anything on the web is identified as a resource, and each unique resource is identified by a unique URL.
- Operations as HTTP methods: REST leverages existing HTTP methods, particularly GET, PUT, POST and DELETE, which map to a resource's read, update, create and removal operations respectively.

Any action performed by a client over HTTP contains a URL and an HTTP method. The URL represents the resource and the HTTP method represents the action which needs to be performed over the resource. Being a broad architectural style, REST is always open to different interpretations, and the ambiguity is exacerbated by the fact that there aren't nearly enough HTTP methods to support common operations. One of the most common examples is the lack of a 'search' method. Search is one of the most extensively used features across applications, but there has been no standard for implementing it, so different people tend to design search in different ways. Given that REST aims to unify service architecture, any ambiguity must be seen as weakening the argument for REST. Further in this document we shall discuss how search over REST can be simplified. We are not aiming at developing standards for RESTful search; we shall only discuss how this problem can be approached.

Search Requirements

Search is the most used feature across different web applications, and its requirements look similar from one application to another. Below is a list of some common constituents of search features:

- Search based on one or more criteria at a time, e.g. search red colored cars of type hatchback: color=red && type=hatchback
- Relational and conditional operator support, e.g. search red or black cars with mileage greater than 10: color=red|black && mileage > 10
- Wild card search, e.g. search cars made by a company whose name starts with M: company=M*
- Pagination, e.g. list all cars but fetch 100 results at a time: upperLimit=200 && lowerLimit=101
- Range searches, e.g. get all the cars launched between 2000 and 2010: launch year between (2000, 2010)

When we support search with such features, the search interface design itself becomes complex, and when implemented in a REST framework, meeting all these requirements (while still conforming to REST!) is challenging. Coming back to the basic REST principles, we are left with two questions:

- Which HTTP method should be used for "search"?
- How do we create an effective resource URL for search (query parameters versus embedded URLs, and how to model the filter criteria)?

HTTP Method Selection

Effectively, REST categorizes operations by their nature and associates well-defined semantics with these categories. The idempotent operations are GET, PUT and DELETE (GET for read-only, PUT for update, DELETE for remove), while the POST method is used for non-idempotent procedures like create. By definition, search is a read-only operation which requests a collection of resources filtered on some criteria, so the GET HTTP method is an obvious choice. However, with GET we are constrained with respect to URL size if we add complex criteria to the URL.

URL Representation

Let's discuss this using an example: a user wishes to search four-doored sedan cars of blue color. How should the resource URL for this request look?
The two URLs below are syntactically different but semantically the same:

/cars/?color=blue&type=sedan&doors=4
/cars/color:blue/type:sedan/doors:4

Both of the above URLs conform to the RESTful way of representing a resource query, but they are represented differently. The first one uses URL query criteria to add the filtering details, while the latter goes with an embedded URL approach. The embedded URL approach is more readable and can take advantage of the native caching mechanisms that exist on the web server for HTTP traffic. But this approach forces the user to provide parameters in a specific order; wrong parameter positions will cause errors or unwanted behaviour. The two URLs below look the same but may not give you the same results:

/cars/color:red/type:sedan
/cars/type:sedan/color:red

Also, since there is no standardization for embedding criteria, people may tend to devise their own ways of representation. So we favour the query criteria approach over the embedded URL approach, though the representation is a bit more complex and lacks readability.

Modeling Filter Criteria

A search-results page is fundamentally RESTful even though its URL identifies a query. The URL should be able to incorporate SQL-like elements. While SQL is meant to filter data fetched from relational data, this new modelling language should be able to filter data from a hierarchical set of resources, and should help in devising a mechanism to communicate complex search requirements over URLs. Two such styles are discussed in detail below.

Feed Item Query Language (FIQL): The Feed Item Query Language (FIQL, pronounced "fickle") is a simple but flexible, URI-friendly syntax for expressing filters across the entries in a syndicated feed. These filter expressions can be mapped to any RESTful service and can help in modelling complex filters. Below are some samples of such web URLs against their respective SQL statements:

SQL: select * from actors where firstname='PENELOPE' and lastname='GUINESS'
REST search URL: /actors?_s=firstname==PENELOPE;lastname==GUINESS

SQL: select * from actors where lastname like 'PEN%'
REST search URL: /actors?_s=lastname==PEN*

SQL: select * from films where filmid=1 and rentalduration <> 0
REST search URL: /films?_s=filmid==1;rentalduration!=0

SQL: select * from films where filmid >= 995
REST search URL: /films?_s=filmid=ge=995

SQL: select * from films where releasedate <= '27/05/2005'
REST search URL: /film?_s=releasedate=le=2005-05-27T00:00:00.000%2B00:00

Resource Query Language (RQL): Resource Query Language (RQL) defines a syntactically simple query language for querying and retrieving resources. RQL is designed to be URI friendly, particularly as the query component of a URI, and highly extensible. RQL is a superset of HTML's URL encoding of form values, and a superset of FIQL. RQL basically consists of a set of nestable named operators which each have a set of arguments and operate on a collection of resources.

Case study: Apache CXF advanced search features

To support advanced search capabilities, Apache CXF introduced FIQL support in its JAX-RS implementation with the 2.3.0 release. With this feature, users can express complex search expressions using a URI. Below is a detailed note on how to use this feature. To work with FIQL queries, a SearchContext needs to be injected into application code and used to retrieve a SearchCondition representing the current FIQL query. This SearchCondition can be used in a number of ways for finding the matching data.
@Path("books") public class Books { private Map books; @Context private SearchContext context;@GET public List getBook() {SearchCondition sc = searchContext.getCondition(Book.class); //SearchCondition is method can also be used to build a list of// matching beans iterate over all the values in the books map and // return a collection of         matching beans List found = sc.findAll(books.values()); return found; } } SearchCondition can also be used to get to all the search requirements (originally expressed in FIQL) and do some manual comparison against the local data. For example, SearchCondition provides a utility toSQL(String tableName, String… columnNames) method which internally introspects all the search expressions constituting a current query and converts them into an SQL expression: // find all conditions with names starting from 'ami' // and levels greater than 10 : // ?_s="name==ami*;level=gt=10" SearchCondition sc = searchContext.getCondition(Book.class); assertEquals("SELECT * FROM table WHERE name LIKE 'ami%' AND level > '10'", sq.toSQL("table")); Conclusion Data querying is a critical component of most applications. With the advance of rich client-driven Ajax applications and document oriented databases, new querying techniques are needed; these techniques must be simple but extensible, designed to work within URIs and query for collections of resources. The NoSQL movement is opening the way for a more modular approach to databases, and separating out modelling, validation, and querying concerns from storage concerns, but we need new querying approaches to match more modern architectural design.   Reference: Guava’s Strings Class from our JCG partner Dustin Marx at the Inspired by Actual Events blog. ...

ZK Web Framework Thoughts

I've been asked several times to present some of my opinions about ZK. So, based on my experience of 4 years as a ZK user, here are some thoughts.

Overall developer experience, the community and documentation

"It just works"

Most of the stuff that ZK offers works very well, and the functionality is usually very intuitive to use if you have developed any desktop Java applications before. In 2007 I did a comparison of RIA technologies that included Echo2, ZK, GWT, OpenLaszlo and Flex. Echo2 and OpenLaszlo felt incomplete and buggy and didn't seem to have proper Maven artifacts anywhere. GWT seemed more of a technical experiment than a good platform to build on. Flex was dropped because some important Maven artifacts were missing and Flash was an unrealistic requirement for the application. On the other hand, ZK felt the most "natural" and I was able to get productive with it quickly. During my 4-year journey with ZK, I've had plenty of those "wow" moments as I learned more of ZK and improved my architectural understanding of the framework. Nowadays I've got a pretty good understanding of what in ZK works, what doesn't, and what has problems. But still, after gaining all this good and bad insight, I consider ZK to be a very impressive product out of the box. The downside is, of course, that the framework hides a lot of things from newcomers in order to be easy to use, and some of these things will bite you later on, especially if your application has lots of users.

It's very, very, very flexible

ZK is very flexible and has plenty of integrations. Do you want to use declarative markup to build component trees? Use ZUL files. Do you want to stick to plain Java? Use richlets. You can also integrate JSP, JSF and Spring, and use plenty of languages in zscript. The core framework is also pretty flexible, and you can override a lot of stuff if you run into problems. The downside is that there are very many ways of doing things correctly, and even more ways of screwing up. Flexibility itself is not a negative point, but I think that the ZK documentation doesn't guide users enough towards the best practices of ZK. What are the best practices anyway? Many tutorials use zscript, but the docs also recommend avoiding it for performance reasons.

The forum is quite active

I think that the ZK forum is one of the best places to learn about ZK. It's pretty active and the threads vary from beginner level to deep technical stuff. I read the forums myself almost every day and sometimes help people with their problems. There's one thing that troubles me a bit: the English in the forums isn't usually very good and people often ask overly broad questions. I know it's not fair to criticize the writings of non-native English speakers, especially when I'm not a native speaker myself. Regardless, I think that such a barrier exists. For example, take 5 random threads from the ZK forum and the Spring Web forum. The threads in the Spring forums are typically more detailed and focused, and people clearly spend some time formulating good and detailed questions, instead of the "I'm a newbie and I need to create application x with tons of features, please tell me how to do everything"-type threads you see in the ZK forums. You'll see that you have to spend a bit more time in the ZK forum in order to understand the threads. It's not anybody's fault, nor a bad thing; this is just an observation.
Unfortunately for me it means that some of the limited time I have for the ZK community is spent just trying to understand what people are saying. Usually I answer a thread only when I know the answer right away, or when the thread concerns some deep technical stuff.

There's plenty of documentation

In the past the ZK documentation was scattered and out of date, and some of the more important stuff was completely missing. In recent years the docs have improved a lot, and there are now separate comprehensive references for ZK configuration, client-side ZK, and styling. I think the documentation is very good today, and most basic questions can be easily answered by reading the docs.

As I mentioned above, ZK has a tendency to "just work". The overall technical quality is impressive and on par with most Java web frameworks, but I believe there are some parts of ZK that are less impressive.

Stuck on Java 1.4

ZK is built with Java 1.4, which greatly limits the flexibility of its API and its internal code quality. Negative effects on ZK internal code:

- ThreadLocals are not removed with remove() (calling set(null) does prevent leaking the contained object, but does not properly remove a ThreadLocal)!
- Lots of custom synchronization code where simple java.util.concurrent data structures or objects would work (ConcurrentHashMap, Semaphore, Atomic*, etc.)
- StringBuffer is used where StringBuilder would be appropriate

No annotations

Personally I'm not a fan of annotation-heavy frameworks, because annotations are an extralinguistic feature and you usually end up annotating with string-based values that have no type safety. However, I know that some people would be overjoyed to have an API based on them.

No enums

There are many places in the ZK API where proper enums would be much better than the hacks that are used at the moment. The worst offender is Messagebox. Just look at this signature:

public static int show(String message, String title, int buttons, java.lang.String icon, int focus)

Ugh... the magic integers remind me of SWT (which is a great library with an awful API). Let's imagine an alternative version with enums and generics:

public static Messagebox.Button show(String message, String title, Set<Messagebox.Button> buttons, Messagebox.Icon icon, Messagebox.Button focus)

Much, much better and more typesafe. No more bitwise OR magic. I could code this into ZK in 10 minutes if it used Java 1.5.

No generics

This is the worst part of being stuck on Java 1.4. I'll just list some of the places where I'd like to see generics.

Collection values in API signatures. Example in org.zkoss.zk.ui.util.Initiator:

void doInit(Page page, Map args);
vs
void doInit(Page page, Map<String, Object> args);

Example in org.zkoss.zk.ui.Component:

List getChildren();
vs
List<Component> getChildren();

Collection-like classes. Example in ListModel:

public interface ListModel {
    ...
    Object getElementAt(int index);
    ...
}
vs
public interface ListModel<T> {
    ...
    T getElementAt(int index);
    ...
}

All ListModel* classes should also be generic (most extend java.util.Collection).

org.zkoss.zk.ui.event.EventListener:

public interface EventListener {
    public void onEvent(Event event);
}
vs
public interface EventListener<T extends Event> {
    public void onEvent(T event);
}

org.zkoss.zk.ui.util.GenericAutowireComposer:

public class GenericAutowireComposer {
    protected Component self;
    ...
}
vs
public class GenericAutowireComposer<T extends Component> {
    protected T self;
    ...
No annotations

Personally I’m not a fan of annotation-heavy frameworks, because annotations are an extralinguistic feature and you usually end up with annotations carrying string-based values that have no type safety. However, I know that some people would be overjoyed to have an API based on them.

No enums

There are many places in the ZK API where proper enums would be much better than the hacks that are used at the moment. The worst offender is Messagebox. Just look at this signature:

    public static int show(String message, String title, int buttons, java.lang.String icon, int focus)

Ugh... the magic integers remind me of SWT (which is a great library with an awful API). Let’s imagine an alternative version with enums and generics:

    public static Messagebox.Button show(String message, String title, Set<Messagebox.Button> buttons, Messagebox.Icon icon, Messagebox.Button focus)

Much, much better and more typesafe. No more bitwise-OR magic. I could code this into ZK in 10 minutes if it used Java 1.5.

No generics

This is the worst part of being stuck on Java 1.4. I’ll just list some of the places where I’d like to see generics.

Collection values in API signatures. Example in org.zkoss.zk.ui.util.Initiator:

    void doInit(Page page, Map args);
    vs
    void doInit(Page page, Map<String, Object> args);

Example in org.zkoss.zk.ui.Component:

    List getChildren();
    vs
    List<Component> getChildren();

Collection-like classes. Example in ListModel:

    public interface ListModel {
        ...
        Object getElementAt(int index);
        ...
    }
    vs
    public interface ListModel<T> {
        ...
        T getElementAt(int index);
        ...
    }

All ListModel* classes should also be generic (most extend java.util.Collection).

org.zkoss.zk.ui.event.EventListener:

    public interface EventListener {
        public void onEvent(Event event);
    }
    vs
    public interface EventListener<T extends Event> {
        public void onEvent(T event);
    }

org.zkoss.zk.ui.util.GenericAutowireComposer:

    public class GenericAutowireComposer {
        protected Component self;
        ...
    }
    vs
    public class GenericAutowireComposer<T extends Component> {
        protected T self;
        ...
    }

All *Renderer classes. Example in org.zkoss.zul.RowRenderer:

    public interface RowRenderer {
        void render(Row row, Object data);
    }
    vs
    public interface RowRenderer<T> {
        void render(Row row, T data);
    }

Unimpressive server push implementations

The default PollingServerPush has latency and will absolutely kill your application server if there are many active users. CometServerPush is better, but it does not use non-blocking IO and will block servlet threads in your servlet container. Let’s put this into perspective: Tomcat 7.0’s default configuration sets the connector’s max threads to 200. This means that if you have 200 comet-enabled desktops, Tomcat will stop responding to other requests because all the threads are in use by comet. If the implementation used Servlet 3.0 or container-specific async APIs instead, you could run Tomcat even with one thread. It would of course be slow, but it would not stop working! (A rough sketch of the async-servlet idea appears after the references at the end of this article.) Also, CometServerPush requires ZK EE, so regular users are stuck with PollingServerPush. I’d say that’s a pretty big limitation considering how server push is marketed. However, it’s not surprising: proper non-blocking comet is hard to implement and requires non-blocking components in all parts of the pathway from the browser to the servlet code.

Zscript

I don’t like zscript. It might have been a good feature many years ago, but I believe that today it should not be used at all. Why, oh why would someone want to replace typesafe, compiled Java code with non-typechecked zscript mixed into ZUL templates?

- “I can use Python/Ruby/…”. This might be a valid point for some people, but you’ll end up with unmaintainable code mangled inside ZUL templates.
- “Changes are visible when you save the file”. True, but I would never sacrifice so much just for this feature. And besides, you can get a similar effect with JRebel.

So, if you put “Java code” (= BeanShell code) in zscript, you might want to rethink that.

Reliance on reflection

Many useful features rely on reflection, which limits what the compiler can check for you. This is a very typical thing in many Java libraries and frameworks, so it’s not really ZK-specific. As a Scala user I can see how the limitations of Java have guided most frameworks down the path of reflection and annotations. Reflection cannot always be avoided, but I think it’s a bad sign if most of the useful features rely on it. Here are some features in ZK that use reflection:

- Any kind of event listening that does not use component.addEventListener. This includes any classes that extend GenericEventListener (such as all ZK-provided Composer classes except MultiComposer).
- Data binding.
- EL expressions in ZUL templates.

Reference: Thoughts about the ZK Web Framework: Overall experience & Thoughts about the ZK Web Framework: Technical stuff from our JCG partner Joonas Javanainen at the Jawsy Solutions technical blog

Related Articles : Getting Started with SmartGWT for awesome GWT interfaces, Advanced SmartGWT Tutorial Part 1, Securing GWT apps with Spring Security, GWT EJB3 Maven JBoss 5.1 integration tutorial, Spring MVC3 Hibernate CRUD Sample Application, Spring MVC Development – Quick Tutorial
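As referenced above, here is a minimal sketch of the async-servlet idea, using only the standard Servlet 3.0 API. The servlet name, URL pattern and timeout are made up for illustration, and this is not how ZK itself implements server push; the point is simply that a comet-style endpoint can park a request without holding a container thread:

    import java.io.IOException;
    import javax.servlet.AsyncContext;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/push", asyncSupported = true)
    public class CometSketchServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Detach the request from the container thread; the thread returns
            // to the pool immediately instead of blocking until data arrives.
            final AsyncContext ctx = req.startAsync();
            ctx.setTimeout(30000);
            // Later, an application event completes the response from another
            // thread, e.g.:
            //   ctx.getResponse().getWriter().write("update");
            //   ctx.complete();
        }
    }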

Top 10 Java Code Geeks posts for 2011

2011 is coming to its end, and like last year, we have created a compilation of the Top 10 Java Code Geeks posts for this year. This compilation serves as a reminder of our best moments for the year that is ending. The ranking was based on the absolute number of page views per post, not necessarily unique, and includes only articles published in 2011. So, let’s see the top posts for 2011, in ascending order.

10) The top 9+7 things every programmer or architect should know
This article comprises a nice compilation of thoughts and topics about software development from very experienced authors. Hints and tips on things that programmers and architects should know. Closely related to an older article of ours, Things Every Programmer Should Know.

9) Hate Java? You’re fighting the wrong battle.
An article that explains why hating Java is futile. The author provides some reasons why Java is looked down on by some developers nowadays and argues that those “accusations” are quite irrelevant.

8) Java Fork/Join for Parallel Programming
This tutorial is a nice introduction to parallel programming with Java’s Fork/Join framework, published before the launch of Java 7 (which integrates those features into the JDK). It provides some examples of how to use the Fork/Join framework, which is designed to make divide-and-conquer algorithms easy to parallelize.

7) Android Quick Preferences Tutorial
An Android development tutorial. It explains how to use the native preferences framework in order to show, save and manipulate the user’s preferences very easily. Don’t forget to check out the Android snippets section in our Java Examples and Code Snippets site.

6) RESTful Web Services with RESTeasy JAX-RS on Tomcat 7 – Eclipse and Maven project
The use of RESTful web services is constantly rising and this tutorial explains how to implement RESTful services using JBoss RESTeasy. The application is built with Maven and gets deployed on a Tomcat instance. A nice tutorial to get you started with REST.

5) Spring MVC Development – Quick Tutorial
This tutorial will kickstart your web applications, allowing you to leverage Spring’s web framework. Spring MVC enables easy web application development with a framework based on the Model View Controller (MVC) architectural pattern.

4) Android JSON Parsing with Gson Tutorial
The existence of this tutorial in the Top 10 was quite a surprise to me, especially since it rose to such a high position. It shows that Android developers look for efficient ways to handle JSON data, and Google Gson is an elegant solution for this.

3) 10 Tips for Proper Application Logging
A compilation of tips on logging and how to use it properly in your applications, highly recommended. The topic of logging is enormous, so make sure to also check out The Java Logging Mess and Configure LogBack Logging with Spring.

2) Android Google Maps Tutorial
Another Android hit. This tutorial shows how to integrate Google Maps into your Android application. The well-established Google Maps API is used under the hood in order to bring the power of Google Maps to your Android applications. After this, also check out Android Location Based Services Application.

1) Funny Source Code Comments
The most popular Java Code Geeks post for 2011 and, all in all, second only to our all-time classic GWT 2 Spring 3 JPA 2 Hibernate 3.5 Tutorial. It is a collection of funny source code comments, provided by developers all over the world. Take a look at it, it could definitely make your day.

That’s it guys.
Our top posts for 2011. I hope you have enjoyed our blog during the past year and that you will continue to provide your support in the year to come. Happy new year everyone! From the whole Java Code Geeks team, our best wishes!

Ilias Tsagklis