
What's New Here?


Capistrano: Deploying to a Vagrant VM

I've been working on a tutorial around thinking through problems in graphs using my football graph and I wanted to deploy it on a local Vagrant VM as a stepping stone to deploying it in a live environment. My Vagrantfile for the VM looks like this:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant::Config.run do |config|
  config.vm.box = "precise64"

  config.vm.define :neo01 do |neo|
    neo.vm.network :hostonly, "192.168.33.101"
    neo.vm.host_name = 'neo01.local'
    neo.vm.forward_port 7474, 57474
    neo.vm.forward_port 80, 50080
  end

  config.vm.box_url = "http://files.vagrantup.com/precise64.box"

  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file = "site.pp"
    puppet.module_path = "puppet/modules"
  end
end

I'm port forwarding ports 80 and 7474 to 50080 and 57474 respectively so that I can access the web app and the neo4j console from my browser. There is a bunch of Puppet code to configure the machine in the location specified. Since the web app is written in Ruby/Sinatra, the easiest deployment tool to use is probably Capistrano, and I found the tutorial on the Beanstalk website really helpful for getting me set up. My config/deploy.rb file, which I've got Capistrano set up to read, looks like this:

require 'capistrano/ext/multistage'

set :application, "thinkingingraphs"
set :scm, :git
set :repository, "git@bitbucket.org:markhneedham/thinkingingraphs.git"
set :scm_passphrase, ""

set :ssh_options, {:forward_agent => true}
set :default_run_options, {:pty => true}
set :stages, ["vagrant"]
set :default_stage, "vagrant"

In my config/deploy/vagrant.rb file I have the following:

set :user, "vagrant"
server "192.168.33.101", :app, :web, :db, :primary => true
set :deploy_to, "/var/www/thinkingingraphs"

The IP there is the same one that I assigned in the Vagrantfile. If you didn't do that, then you'd need to use 'vagrant ssh' to go onto the VM and then 'ifconfig' to grab the IP instead. I figured there was probably another step required to tell Capistrano where it should get the vagrant public key from, but I thought I'd try and deploy anyway just to see what would happen.

$ bundle exec cap deploy

It asked me to enter the vagrant user's password, which is 'vagrant' by default, and I eventually found a post on StackOverflow which suggested changing the 'ssh_options' to the following:

set :ssh_options, {:forward_agent => true, keys: ['~/.vagrant.d/insecure_private_key']}

And with that the deployment worked flawlessly! Happy days.

Reference: Capistrano: Deploying to a Vagrant VM from our JCG partner Mark Needham at the Mark Needham blog.

On Java 8’s introduction of Optional

I recently discovered JDK 8's addition of the Optional type. The Optional type is a way to avoid NullPointerException, as API consumers that get Optional return values from methods are "forced" to perform "presence" checks in order to consume their actual return value. More details can be seen in the Javadoc. A very interesting further read is this blog post, which compares the general notion of null and how null is handled in Java, SML, and Ceylon: http://blog.informatech.cr/2013/04/10/java-optional-objects. "Blank" and "initial" states were already known to Turing. One could also argue that the "neutral" or "zero" state was required in the Babbage Engine, which dates back to Ada Lovelace in the 1800s.

On the other hand, mathematicians also prefer to distinguish "nothing" from "the empty set", which is "a set with nothing inside". This compares well with "NONE" and "SOME", as illustrated by the aforementioned Informatech blog post, and as implemented by Scala, for instance.

Anyway, I've given Java's Optional some thought. I'm really not sure if I'm going to like it, even if Java 9 were eventually to add some syntactic sugar to the JLS, resembling that of Ceylon, to leverage Optional on a language level. Since Java is so incredibly backwards-compatible, none of the existing APIs will be retrofitted to return Optional; e.g., the following isn't going to surface in JDK 8:

public interface List<E> {
    Optional<E> get(int index);
    [...]
}

Not only can we assign null to an Optional variable, but the absence of "Optional" doesn't guarantee the semantics of "SOME", as lists will still return "naked" null values. When we mix the two ways of thinking, we will wind up with two checks instead of one:

Optional<T> optional = // [...]
T nonOptional = list.get(index);

// If we're paranoid, we'll double-check!
if (optional != null && optional.isPresent()) {
    // do stuff
}

// Here we probably can't trust the value
if (nonOptional != null) {
    // do stuff
}

Hence… -1 from me to Java's solution.

Further reading

Of course, this has been discussed millions of times before. So here are a couple of links:

- No more excuses to use null references in Java 8
- The Java Posse User Group
- Lambda Dev Mailing List (Optional != @Nullable)
- Lambda Dev Mailing List (Optional class is just a Value)

Reference: On Java 8's introduction of Optional from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.
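As a complement (this snippet is not from the original post), here is a minimal sketch of the boundary-wrapping idea: a legacy, null-returning API is wrapped with Optional.ofNullable at the call site, so callers only ever deal with one explicit style of absence check.

import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public class OptionalBridge {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ada", null, "Alan");

        // Wrap the "naked" null from the legacy API at the boundary,
        // so the rest of the code only sees Optional.
        Optional<String> second = Optional.ofNullable(names.get(1));

        // One check instead of two: absence is modelled explicitly.
        System.out.println(second.map(String::toUpperCase).orElse("<missing>"));
    }
}

The trade-off the post points out still stands: this only pays off if the wrapping discipline is applied consistently at every boundary, otherwise the double checks creep back in.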

Spring JpaRepository Example (In-Memory)

This post describes a simple Spring JpaRepository example using an in-memory HSQL database. The code example is available from GitHub in the Spring-JpaRepository directory. It is based on the Spring-MVC-With-Annotations example and information available here.

JPA Repository

We implement a dummy bean for this example:

@Entity
@AutoProperty
public class SomeItem {

    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long Id;

    private String someText;

    /* ... Setters & Getters */

}

and the corresponding JpaRepository:

@Transactional
public interface SomeItemRepository
        extends JpaRepository<SomeItem, Long> {

}

Service & Controller

Next, we implement a service where our repository will be injected. We also populate the repository with dummy data:

@Service
@Repository
public class SomeItemService {

    @Autowired
    private SomeItemRepository someItemRepository;

    @PostConstruct
    @Transactional
    public void populate() {
        SomeItem si = new SomeItem();
        si.setSomeText("aaa");
        someItemRepository.saveAndFlush(si);

        si = new SomeItem();
        si.setSomeText("bbb");
        someItemRepository.saveAndFlush(si);

        si = new SomeItem();
        si.setSomeText("ccc");
        someItemRepository.saveAndFlush(si);
    }

    @Transactional(readOnly=true)
    public List<SomeItem> getAll() {
        return someItemRepository.findAll();
    }

    @SuppressWarnings("AssignmentToMethodParameter")
    @Transactional
    public SomeItem saveAndFlush(SomeItem si) {
        if ( si != null ) {
            si = someItemRepository.saveAndFlush(si);
        }
        return si;
    }

    @Transactional
    public void delete(long id) {
        someItemRepository.delete(id);
    }

}

and a controller:

@Controller
public class MyController {

    @Autowired
    private SomeItemService someItemService;

    @RequestMapping(value = "/")
    public ModelAndView index() {
        ModelAndView result = new ModelAndView("index");
        result.addObject("items", this.someItemService.getAll());
        return result;
    }

    @RequestMapping(value = "/delete/{id}")
    public String delete(@PathVariable(value="id") String id) {
        this.someItemService.delete(Long.parseLong(id));
        return "redirect:/";
    }

    @RequestMapping(value = "/create")
    @SuppressWarnings("AssignmentToMethodParameter")
    public String add() {
        SomeItem si = new SomeItem();
        si.setSomeText("Time is: " + System.currentTimeMillis());
        this.someItemService.saveAndFlush(si);
        return "redirect:/";
    }

}

JPA Configuration

On top of creating an entity manager based on an in-memory instance of the HSQL database, we enable JPA repositories with the @EnableJpaRepositories annotation:

@Configuration
@EnableJpaRepositories(basePackages={"com.jverstry"})
@EnableTransactionManagement
public class JpaConfig implements DisposableBean {

    private EmbeddedDatabase ed;

    @Bean(name="hsqlInMemory")
    public EmbeddedDatabase hsqlInMemory() {
        if ( this.ed == null ) {
            EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
            this.ed = builder.setType(EmbeddedDatabaseType.HSQL).build();
        }
        return this.ed;
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
        LocalContainerEntityManagerFactoryBean lcemfb
            = new LocalContainerEntityManagerFactoryBean();

        lcemfb.setDataSource(this.hsqlInMemory());
        lcemfb.setPackagesToScan(new String[] {"com.jverstry"});
        lcemfb.setPersistenceUnitName("MyPU");

        HibernateJpaVendorAdapter va = new HibernateJpaVendorAdapter();
        lcemfb.setJpaVendorAdapter(va);

        Properties ps = new Properties();
        ps.put("hibernate.dialect", "org.hibernate.dialect.HSQLDialect");
        ps.put("hibernate.hbm2ddl.auto", "create");
        lcemfb.setJpaProperties(ps);

        lcemfb.afterPropertiesSet();

        return lcemfb;
    }

    @Bean
    public PlatformTransactionManager transactionManager() {
        JpaTransactionManager tm = new JpaTransactionManager();
        tm.setEntityManagerFactory(this.entityManagerFactory().getObject());
        return tm;
    }

    @Bean
    public PersistenceExceptionTranslationPostProcessor exceptionTranslation() {
        return new PersistenceExceptionTranslationPostProcessor();
    }

    @Override
    public void destroy() {
        if ( this.ed != null ) {
            this.ed.shutdown();
        }
    }

}

The JSP Page

We create a simple page to list existing items with a delete link, and the possibility to create new items.

Running The Example

One can run it using the maven tomcat:run goal. Then, browse: http://localhost:9191/spring-jparepository/

Reference: Spring JpaRepository Example (In-Memory) from our JCG partner Jerome Verstrynge at the Technical Notes blog.
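The post exercises the repository through the MVC controller; as a purely hypothetical complement (not part of the original project), the sketch below bootstraps the same configuration from a plain main method. It assumes JpaConfig and SomeItemService are picked up by scanning the com.jverstry package, and that the entity's elided getters include getSomeText().

package com.jverstry;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class RepositoryDemo {
    public static void main(String[] args) {
        // Hypothetical bootstrap: scan com.jverstry for JpaConfig, the
        // service and the repository declared in the article.
        AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext("com.jverstry");
        try {
            SomeItemService service = ctx.getBean(SomeItemService.class);
            for (SomeItem item : service.getAll()) {
                // getSomeText() is assumed from the entity's "Setters & Getters"
                System.out.println(item.getSomeText());
            }
        } finally {
            ctx.close();
        }
    }
}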

SuperMan bound by Java Monitors

It's a dark time in the life of SuperMan. Jor-El wants him to go on a voyage to prepare him for his ultimate destiny. Yet the Earth is faced with doomsday and the Justice League needs their Man of Steel in action to save the world. But you can't do both at the same time since we have just one SuperMan. Also, he cannot fight doomsday without first fulfilling his destiny and realizing his true powers. How do we call upon SuperMan without making the man go bonkers over what to do? This should be done in an orderly manner where one has to wait until the voyage is done. We will make use of Java monitors to help SuperMan listen to his Kryptonian father as well as come back in time to save the world from doomsday. First of all we define the Man of Steel:

/**
 * The awesome kryptonian man is represented by this class
 *
 * @author Dinuka Arseculeratne
 */
public class SuperMan {

    private boolean onVoyage = false;

    /**
     * Schedule a voyage for Superman. Note that this method first checks whether he is
     * already on a voyage, and if so calls the wait() method to halt the current thread
     * until notify is called and onVoyage is set to false.
     */
    public synchronized void goOnVoyage() {
        if (onVoyage) {
            try {
                System.out.println("SuperMan is already on a voyage. Please wait until he returns from his quest.");
                wait();
                System.out.println("His voyage is over, time for him to go on a new voyage....");
            } catch (InterruptedException e) {
                System.out.println(" I am SuperMan, I do not handle these petty exceptions");
            }
        }
        onVoyage = true;
        notify();
    }

    /**
     * This method calls Superman back from his current voyage. Again the method
     * checks whether SuperMan is not already on a voyage. If so, the current thread is
     * halted until he is scheduled to go on a voyage, because he needs to be on a voyage
     * to be called back in the first place.
     */
    public synchronized void returnFromVoyage() {
        if (!onVoyage) {
            try {
                System.out.println("SuperMan is not yet on a voyage. Please Wait.");
                wait();
                System.out.println("Great he has gone on a voyage, time to call him back!!");
            } catch (InterruptedException e) {
                System.out.println(" I am SuperMan, I do not handle these petty exceptions");
            }
        }
        onVoyage = false;
        notify();
    }
}

So we have defined SuperMan. Note that he has two methods defined: one which allows him to go on a voyage and another to call him back from his current voyage. As you can see, SuperMan does not handle exceptions because, well………. he is SuperMan and he is the exception. You can see that before each call we check the boolean indicating whether he is on a voyage or not and, depending on the method called, wait() is called on the object in order to halt the current thread until notify() is called by the thread that is currently operating on the object. Note that wait() and notify() should be called inside a synchronized method or block for them to work accurately, because you first need to acquire a lock in order to halt or notify. Getting back to the previous issue, we know that both the Justice League and Jor-El need SuperMan, but for different purposes.
Let's see how this battle unravels with the following code snippet:

public class Test {

    public static void main(String[] args) {
        SuperMan superMan = new SuperMan();

        JusticeLeague justiceLeague = new JusticeLeague(superMan);
        justiceLeague.start();

        JorEl jorEl = new JorEl(superMan);
        jorEl.start();
    }
}

class JusticeLeague extends Thread {

    private SuperMan superMan = null;

    public JusticeLeague(SuperMan superMan) {
        this.superMan = superMan;
    }

    @Override
    public void run() {
        superMan.returnFromVoyage();
    }
}

class JorEl extends Thread {

    private SuperMan superMan = null;

    public JorEl(SuperMan superMan) {
        this.superMan = superMan;
    }

    @Override
    public void run() {
        superMan.goOnVoyage();
    }
}

Note that here we have JorEl and the JusticeLeague operating on two different threads, trying to access SuperMan concurrently. As you can see from our main method, the JusticeLeague wants to call back SuperMan in order to save the world. But fortunately he is not yet on a voyage, so it's illegal to ask him to return. Then comes JorEl, asking his son to go on a voyage to fulfill his true destiny. It is only after this voyage that he can return to save planet Earth. If you run this now, you can see that the JusticeLeague thread is halted until SuperMan goes on the voyage and notify is called. Just for fun, try to comment out the notify() method and you will see the application hang, because now one thread will wait indefinitely until it is notified of the completion of the process. If not for Java monitors, SuperMan would have failed, since he would have gone to face doomsday without first going on his voyage and fulfilling his destiny. And Java saves the world again. Note: the story is fictional, yet Java monitors are real.

Reference: SuperMan bound by Java Monitors from our JCG partner Dinuka Arseculeratne at the My Journey Through IT blog.
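One caveat worth adding (this sketch is not from the original post): the standard guarded-block idiom calls wait() inside a loop that re-checks the condition, which protects against spurious wakeups and against a second waiter consuming the signal. A minimal generic sketch:

public class Gate {
    private boolean open = false;

    // Guarded block: re-check the condition in a loop, because wait()
    // can return spuriously or another thread may have taken the signal.
    public synchronized void awaitOpen() throws InterruptedException {
        while (!open) {
            wait();
        }
    }

    public synchronized void open() {
        open = true;
        notifyAll(); // wake every waiter; each one re-checks the condition
    }
}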

How Hadoop Works? HDFS case study

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failures. The Hadoop library contains two major components, HDFS and MapReduce; in this post we will go inside HDFS and discover how it works internally.

HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks and these blocks are stored in a set of DataNodes. The NameNode executes file system namespace operations like opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system's clients. The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode.

HDFS analysis

After analyzing Hadoop with JArchitect, here's the dependency graph of the hdfs project. To do its job, hdfs uses many third-party libs like guava, jetty, jackson and others. The DSM (Design Structure Matrix) gives us more info about the weight of each lib used. HDFS mostly uses the rt, hadoop-common and protobuf libraries. When external libs are used, it's better to check whether we can easily replace a third-party lib with another one without impacting the whole application. There are many reasons that could encourage us to change a third-party lib; the other lib could:

- have more features
- be more performant
- be more secure

Let's take the example of the jetty lib and search for the methods from hdfs that use it directly:

from m in Methods
where m.IsUsing ("jetty-6.1.26") && m.ParentProject.Name == "hadoop-hdfs-0.23.6"
select new {m, m.NbBCInstructions}

Only a few methods use the jetty lib directly, so replacing it with another one would be very easy. In general it's very interesting to isolate, when you can, the use of an external lib to only a few classes; it helps to maintain and evolve the project easily. Let's now discover the major HDFS components:

I - DataNode

Startup

To discover how a data node is launched, let's first search for all entry points of the hdfs jar:

from m in Methods
where m.Name.Contains("main(String[])") && m.IsStatic
select new {m, m.NbBCInstructions}

hdfs has many entry points like DFSAdmin, DfSsc, Balancer and HDFSConcat. For the data node the entry point concerned is the DataNode class, and here's what happens when its main method is invoked: the main method first invokes secureMain and passes the securityResources param to it. When the node is started in a non-secure cluster this param is null; however, when it is started in a secure environment, the param is assigned the secure resources. The SecureResources class contains two attributes:

- streamingSocket: a secure port for data streaming to the datanode.
- listener: a secure listener for the web server.

And here are the methods invoked from DataNode.startDataNode: this method initializes the IPC server and the DataXceiver (the thread for processing incoming/outgoing data streams), and creates the data node metrics instance.

How is data managed?

The DataNode class has an attribute named data of type FSDatasetInterface. FSDatasetInterface is an interface for the underlying storage that stores blocks for a data node. Let's search for the implementations available in Hadoop:

from t in Types
where t.Implement ("org.apache.hadoop.hdfs.server.datanode.FSDatasetInterface")
select new {t, t.NbBCInstructions}

Hadoop provides FSDataset, which manages a set of data blocks and stores them on dirs. Using interfaces enforces low coupling and makes the design very flexible; however, if the implementation is used instead of the interface we lose this advantage. To check whether FSDatasetInterface is used everywhere to represent the data, let's search for all methods using FSDataset directly:

from m in Methods
where m.IsUsing ("org.apache.hadoop.hdfs.server.datanode.FSDataset")
select new {m, m.NbBCInstructions}

Only FSDataset inner classes use it directly; everywhere else FSDatasetInterface is used instead, which makes it very easy to change the kind of dataset. But how can I change the FSDatasetInterface implementation and provide my own? For that, let's search where FSDataset is created:

from m in Methods
let depth0 = m.DepthOfCreateA("org.apache.hadoop.hdfs.server.datanode.FSDataset")
where depth0 == 1
select new {m, depth0}

The factory pattern is used to create the instance. The problem is that if this factory created the implementation directly inside the getFactory method, we would have to change the Hadoop code to give it our custom DataSet manager. Let's discover which methods are used by the getFactory method:

from m in Methods
where m.IsUsedBy ("org.apache.hadoop.hdfs.server.datanode.FSDatasetInterface$Factory.getFactory(Configuration)")
select new {m, m.NbBCInstructions}

The good news is that the factory uses the Configuration to get the class implementation, so we can provide our custom DataSet by configuration alone. We can also search for all classes that can be given by configuration:

from m in Methods
where m.IsUsing ("org.apache.hadoop.conf.Configuration.getClass(String,Class,Class)")
select new {m, m.NbBCInstructions}

Many classes can be injected into the Hadoop framework without changing its source code, which makes it very flexible.

NameNode

The NameNode is the arbitrator and repository for all HDFS metadata. The system is designed in such a way that user data never flows through the NameNode. When the name node is launched, the RPC server is created and the FSNamesystem is loaded. Here's a quick look at these two components:

NameNodeRpcServer

NameNodeRpcServer is responsible for handling all of the RPC calls to the NameNode. For example, when a data node is launched it must register itself with the NameNode; the RPC server receives this request and forwards it to FSNamesystem, which redirects it to the DatanodeManager. Another example is when a block of data is received:

from m in Methods
where m.IsUsedBy ("org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReceived(DatanodeRegistration,String,Block[],String[])")
select new {m, m.NbBCInstructions}

Each rectangle in the graph is proportional to the number of bytes of code instructions, and we can observe that BlockManager.addBlock does most of the job.
What's interesting with Hadoop is that each class has a specific responsibility, and any request is redirected to the corresponding manager.

FSNamesystem

HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The file system namespace hierarchy is similar to most other existing file systems; one can create and remove files, move a file from one directory to another, or rename a file. For example, here's a dependency graph concerning the creation of a symbolic link.

HDFS Client

DFSClient can connect to a Hadoop file system and perform basic file tasks. It uses the ClientProtocol to communicate with a NameNode daemon, and connects directly to DataNodes to read/write block data. Hadoop DFS users should obtain an instance of DistributedFileSystem, which uses DFSClient to handle file system tasks. DistributedFileSystem acts as a facade and redirects requests to the DFSClient class; here's the dependency graph concerning the creation of a directory request.

Conclusion: Using a framework as a user is very interesting, but looking inside the framework gives us more information, helps us understand it better, and makes it easier to adapt it to our needs. Hadoop is a powerful framework used by many companies, and most of them need to customize it. Fortunately, Hadoop is very flexible and permits us to change its behavior without changing the source code.
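The post looks at DFSClient and DistributedFileSystem from the inside; as an outside-in complement (not from the original article), here is a minimal client sketch against the public org.apache.hadoop.fs.FileSystem API. The NameNode address is a placeholder.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder address: point this at your own NameNode.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

        FileSystem fs = FileSystem.get(conf);
        Path dir = new Path("/tmp/hdfs-demo");
        fs.mkdirs(dir);

        // Block data is streamed to DataNodes; only namespace operations
        // such as mkdirs() and create() touch the NameNode.
        try (FSDataOutputStream out = fs.create(new Path(dir, "hello.txt"))) {
            out.writeUTF("hello hdfs");
        }
        fs.close();
    }
}

This matches the architecture described above: the client talks to the NameNode for metadata and directly to DataNodes for the actual bytes.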

Install, setup and start MongoDB on Windows

This post provides the full path from downloading the required binary archive/package for a particular Windows version to starting up MongoDB in various ways. Following are the high-level steps:

- Download the MongoDB binary archive
- Extract the MongoDB archive
- Set up configuration parameters and start/stop MongoDB
  - using the command line
  - using Windows services

Download the MongoDB binary archive

For the Windows platform, MongoDB is distributed as a zip archive. Go to the following downloads page in a browser: http://www.mongodb.org/downloads. Depending on the system architecture, it comes in two distributions:

- 32-bit
- 64-bit

Again, the MongoDB distribution for Windows 64-bit ships in two flavours:

- one for Windows Server 2008 and Windows 7, Server 2012 (download link "*2008R2+")
- another for the rest of the 64-bit Windows OS versions

This distinction for x64 is made based on newer OS features which help enhance the performance of MongoDB. Choose a production release for downloading. After you download, you will get a zip archive named like mongodb-<platform>-<architecture>-<version>.zip

Extract the MongoDB archive

Once we have the MongoDB archive, extract it using any zip extraction program. After extracting, you will see the directories inside the archive. Here, the bin directory contains the binaries in the form of executables, such as mongod.exe, mongo.exe, mongoexport.exe etc.

Set up configuration parameters and start/stop MongoDB

For starting and stopping the MongoDB server, we need only bin\mongod.exe, which is the daemon process executable for MongoDB. In short, it is the executable which drives MongoDB in general. For starting up, we need to provide parameters to the executable, which I will call config parameters or params here. We can set up the config parameters in two ways:

- using command line options, or
- using a config file

Using command line options

With these command line options we configure the mongo daemon process. There are lots of options we can specify, but I will give only those required for this tutorial. Following are some of them:

--dbpath <path> : an existing directory path, required to store data files. This is the most important option we need to specify. Note that the directory path you provide should exist, otherwise the process won't start. If the path contains spaces, put it in double quotes, e.g. --dbpath "c:\Program Files"

--logpath <log-file-path> : an existing file path, used by the mongo daemon process to flush out logs instead of writing them to the standard console. If this path contains spaces, put it in double quotes.

--port <port> : the port number where the mongod process listens for connections from clients; it defaults to 27017 if not specified.

Note: While using the command prompt on some Windows OS versions like Windows 7 or Windows Server 2008, run it with administrator privileges.

Use the following commands to start the server process. Change to the bin directory:

I:\> cd Servers\mongodb\bin

Now type the following command to start the mongod process:

> mongod --dbpath I:\Servers\data --port 27017

While starting, the Windows firewall may block the process. Click "Allow access" to proceed.
After successful execution of the command, it will show logging info in the standard console itself, as follows:

I:\Servers\mongodb\bin>mongod --dbpath I:\Servers\data --port 27017
Tue Apr 09 22:49:13 [initandlisten] MongoDB starting : pid=4380 port=27017 dbpath=I:\Servers\data 64-bit host=Myi-PC
Tue Apr 09 22:49:13 [initandlisten] db version v2.2.1, pdfile version 4.5
Tue Apr 09 22:49:13 [initandlisten] git version: d6764bf8dfe0685521b8bc7b98fd1fab8cfeb5ae
Tue Apr 09 22:49:13 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
Tue Apr 09 22:49:13 [initandlisten] options: { dbpath: "I:\Servers\data", port: 27017 }
Tue Apr 09 22:49:13 [initandlisten] journal dir=I:/Servers/data/journal
Tue Apr 09 22:49:13 [initandlisten] recover : no journal files present, no recovery needed
Tue Apr 09 22:49:13 [initandlisten] waiting for connections on port 27017
Tue Apr 09 22:49:13 [websvr] admin web console waiting for connections on port 28017

If you specify the logpath option, then logging will be directed to that log file instead of showing up on the standard console:

> mongod --dbpath I:\Servers\data --port 27017 --logpath I:\Servers\logs\mongod.log
all output going to: I:\Servers\logs\mongod.log

The prompt will wait there and you can find all the logs at the specified log file location. You can stop this process with Ctrl+C or Ctrl+D.

Using the config file

Instead of specifying command line options, we can specify the same things with a file, which I will call the config file here. The config file is just a normal file containing the parameters in key=value form, one per line. We provide the path to this file as the command line option "-f" or "--config". Following is a snippet of the config file:

#This is an example config file for MongoDB
#basic
dbpath = I:\Servers\mongodb\data
port = 27017
logpath = I:\Servers\mongodb\logs\mongo.log

You can save this file with any extension, but specify the full path with extension while starting the process, as shown in the following commands. From the command prompt, you will use either of the following:

> mongod -f I:\Servers\mongodb\config\mongodb.conf

or

> mongod --config I:\Servers\mongodb\config\mongodb.conf

Start/Stop MongoDB using Windows services

Support for installing the mongod server as a service comes out of the box. The MongoDB daemon executable supports installation as a service through a few command line parameters, without any additional components. We just need to set a few command line params and we are good to go.
Following are the required parameters:

--install : command line switch to install the service
--remove : command line switch to remove the service
--serviceName <name> : the name for the mongod Windows service; it must adhere to Windows service naming, i.e. only alphanumeric characters with no spaces
--serviceDisplayName <display-name> : the display name for the service shown in the services console; put this in double quotes if it contains spaces
--serviceDescription <description> : a small description of the service; put this in double quotes if it contains spaces

While installing as a service we must provide the log file path (unlike when starting from the command line), because a service has no standard console. I will be using the config file for some of the configuration:

> mongod -f "I:\Servers\mongodb\config\mongodb.conf" --install --serviceName mdb27017 --serviceDisplayName "MongoDB Server Instance 27017" --serviceDescription "MongoDB Server Instance running on 27017"

In the specified log path, you can check whether the Windows service started or not. The above installs MongoDB as a Windows service; check the Services console using services.msc. Now we can start or stop MongoDB using the Windows services console. You can remove the service using the following:

> mongod -f "I:\Servers\mongodb\config\mongodb.conf" --remove --serviceName mdb27017 --serviceDisplayName "MongoDB Server Instance 27017" --serviceDescription "MongoDB Server Instance running on 27017"

Reference: Install, setup and start MongoDB on Windows from our JCG partner Abhijeet Sutar at the Another Java Duke blog.
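Not part of the original post: a minimal smoke-test sketch assuming the legacy MongoDB Java driver (the com.mongodb.MongoClient / DB API of that era) is on the classpath, just to confirm the mongod started above is reachable on port 27017.

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

public class MongoSmokeTest {
    public static void main(String[] args) throws Exception {
        // Connects to the mongod instance started above.
        MongoClient client = new MongoClient("localhost", 27017);
        DB db = client.getDB("test");
        DBCollection col = db.getCollection("smoke");
        col.insert(new BasicDBObject("startedAt", System.currentTimeMillis()));
        System.out.println("Documents in 'smoke': " + col.count());
        client.close();
    }
}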

There is no application server

We recently posted data about application server market share gathered from the free Plumbr deployments. It resonated well – via different channels we got hundreds of comments and opinions on how to interpret the data. But one argument, in its different forms, was constantly being made through every channel. Whether it took the form of "Tomcat is not an application server" or "This data is irrelevant as it is not focused on real application servers such as Weblogic or WebSphere", it just kept surfacing. It made us wonder – why does the Java community have such different opinions about what an application server actually is? So we decided to shed some light upon the issue. Looking into the most obvious source – namely Wikipedia – things did not look too bad:

An application server can be either a software framework that provides a generalized approach to creating an application-server implementation, without regard to what the application functions are, or the server portion of a specific implementation instance. In either case, the server's function is dedicated to the efficient execution of procedures (programs, routines, scripts) for supporting its applied applications.

So far, so good. Apparently anything can be an application server based on the Wikipedia definition. But when we tried to find an official definition for a Java EE application server, things got a bit more interesting. If you dig under the hood of the Java EE specification, you discover that neither Sun back in the days nor Oracle uses the term "application server" in the official specifications. Instead, the term "container" is used throughout the materials. The containers must support different specifications, such as JMS, JTA and JSP, to warrant that applications are portable across different implementations.

Next important fact – until Java EE 5 the only way Sun/Oracle acknowledged your product as officially Java EE compliant was to implement the whole specification. This led to large and monolithic "enterprise grade" products, such as the infamous WebSphere and WebLogic of the mid-2000s. As a result, more and more people flocked away from the close-to-impossible-to-use beasts and started using something a bit more humane, such as Tomcat or Jetty. Surprisingly, the vendors of those products could not care less about the official specification, but instead focused on providing good tools for the job at hand. So the specification committee finally surrendered and broke down the specification. This breakdown in Java EE 6 is known as Java EE profiles. The Java EE 6 specification makes it possible for container vendors to choose whether they wish to implement a subset of the specification to get Web Profile certification or aim for the Full Profile and implement all the specifications. A full Java EE 6.0 implementation must cover the following specifications (the Web Profile mandates only a subset of them):

Servlet 3.0, JSP 2.2, EL 2.2, EJB 3.1, JMS 1.1, JavaMail 1.1, JSR-45 1.0, JSTL 1.2, JSF 2.0, Connector 1.6, WebServices 1.3, JAX-RPC 1.1, Common Annotations 1.1, EJB 3.1 Lite, JTA 1.1, JAX-WS 2.2, JAX-RS 1.1, JAXB 2.2, JPA 2.0, Bean Validation 1.0, Managed Beans 1.0, JAXR 1.0, Java EE Management 1.1, Java EE Deployment 1.2, Interceptors 1.0, JSR 299 1.0, Dependency Injection 1.0, JACC 1.4, JASPIC 1.0, WebServicesMetadata 2.1

So if you desire to build your very own full Java EE implementation, you'd better provide implementations for all the 30 acronyms in this list.
As this is by no means a cheap or easy task, at the time of posting this article only the following application servers were officially certified by Oracle on Java EE 6:

Full Profile:

- Oracle GlassFish 3
- IBM WebSphere 8
- IBM WebSphere CE 3
- Oracle WebLogic
- JBoss AS 7
- Apache Geronimo 3
- Hitachi uCosminexus Application Server 9
- Fujitsu Interstage Application Server 10
- TMAX JEUS 7

Web Profile:

- Oracle GlassFish 3
- Caucho Resin 4
- Apache TomEE 1
- JBoss AS 7
- Apache Geronimo 3
- SAP NetWeaver
- JOnAS

If we now compare this list against the most popular application servers, we find that 66% of our user base is happily running on products such as Tomcat and Jetty, which are nowhere in sight in the officially certified container list. But we still think both of them make excellent and easy-to-use platforms for your applications. So – formally there is no such thing as a Java EE application server. Instead we have Java EE containers which, if the vendor desires, can apply for official certification in any of the profiles. And you can definitely have your own cute application server implementing just the Servlet specification and leaving out everything else. You are just not eligible for official certification in this case.

Reference: There is no application server from our JCG partner Nikita Salnikov-Tarnovski at the Plumbr blog.
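To make the closing point concrete (this sketch is not from the original post): a component that targets only the Servlet 3.0 specification deploys unchanged on a bare servlet container such as Tomcat or Jetty, on a Web Profile container, or on a Full Profile server.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Needs nothing beyond the Servlet 3.0 API: no full-profile container required.
@WebServlet("/ping")
public class PingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("pong");
    }
}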

How deep is your code?

Dependency tuples. Picture your code. Picture all those functions on which there are no source-code dependencies. That might sound odd: if there are no source-code dependencies on a function, then what is its purpose? Well, we must distinguish between compile-time dependencies and run-time dependencies. True, all entities at run-time must be called by other run-time entities: it's turtles all the way down (or at least all the way to freshly powered-up hardware and its firmware-initialized instruction pointer). In the hyper-modern weirdness of compile-time, however, the source-code dependency is king and two types of function indignantly shun all source-code dependencies. The first type includes those functions called from outside your source-code. The main() function offers a good example: you write nothing that explicitly calls your main() function; rather, the environment smashes into your source-code to find it. The second type includes those functions which implement a signature declared in an interface somewhere else. Clients of such functions seldom if ever call them directly, opting instead to call them via their interface declarations. Yes, the system puts two and two together at run-time to establish what to execute, but at compile-time your system's flexibility depends on these functions' being called only via associated interfaces. The source-code dependencies fall on the interface signature, the implementing functions escaping the downpour. (This second type also includes functions inherited from a superclass and called via that superclass, but we shall mainly concern ourselves with interfaces here.) So, back to our visualization. Consider a toy system composed of a single class, Wonder. This class has three functions: Java's static main() function, which instantiates the Wonder object by calling the constructor; the constructor itself, Wonder(); and function a(), which is called by the constructor, see figure 1.

How many source-code dependency tuples are there in figure 1? There is one: {main(), Wonder(), a()}. A tuple is just an ordered set of elements. A source-code dependency tuple, on function-level, is just an ordered set of source-code function dependencies. If these look like plain old call-paths, that is because the two concepts share DNA. Call paths, however, burst to life only at run-time, magical manifestations of conditional execution sequences that leap from implementation to implementation, oblivious to interface. Source-code tuples hibernate in the file-system permafrost, frozen – sometimes for years – between updates. They terminate syntactically on any function that in turn has no source-code dependencies, be it implementation or interface declaration. Given that the structure of a program on function-level is simply the enumeration of its functions and their inter-relationships, we can say that the function-level structure of a program is in some sense the union of all its function-level dependency tuples. Some further examples may be helpful.

Figure 2 shows a slightly expanded Wonder class, now with the constructor calling three other functions. There are now three dependency tuples: {main(), Wonder(), a()}, {main(), Wonder(), b()} and {main(), Wonder(), c()}.

Figure 3 has four dependency tuples. All cosmically interesting, of course, but what has this to do with how "deep" code is?

Code depth. The depth of a dependency tuple is simply its cardinality, that is, the number of elements it contains.
So in figure 3, the tuple {main(), Wonder(), c(), d()} has a depth of four. The depth of a program, then, is the average depth of all its dependency tuples. Thus figure 3 shows a system with a depth of 3.5. Given that a change to a function has a greater probability of rippling back to dependent functions than rippling forward to independent ones, depth interests programmers because the deeper a dependency tuple, the more functions are potentially impacted by changes to that tuple. In figure 3, function e() has three transitively-dependent functions: c(), Wonder() and main(), whereas a() in the shorter tuple has just two. Code depth, of course, hardly claims to be the only or even the most important driver of a program's structure; it merely elbows its way into the rabble of competing influences, but programmers should not ignore it. Let us take a glimpse at the best and worst configurations of our Wonder system, from depth's perspective.

Figure 4 shows the depth-wise ideal configuration of our Wonder functions, that is, the shallowest possible configuration: the sunburst. (It would be shallower still if we were prepared to tolerate the main() function's calling all the others statically.) No function here has more than two functions transitively dependent on it.

Figure 5, on the other hand, shows the worst configuration for our Wonder system: a single tuple, maximally deep. Five functions here have more than two functions transitively dependent on them. Again, depth remains one of many influencing factors and programmers certainly do not enjoy the freedom to configure their functions any way they wish; at the very least, semantic constraints demand a logical decomposition. So how deep is code? Do programmers keep to the shallows?

Analysis revisited. Those who have sampled so pitifully small a set as that comprising JUnit and Ant should neither draw firm conclusions nor withhold speculation. Figure 6 shows the function-dependency tuple depth of each release of JUnit from version 3.6 to version 4.9.

This graph shows that JUnit maintained an average tuple depth of around 5.5, that is, most functions could expect to reside within a tuple of 5.5 functions (the standard deviation was around three). This hardly seems excessive. Few programmers would find the complexity of such tuples overwhelming. Compare this, however, with Ant's historical trajectory.

Figure 7 shows Ant's depth, which in version 1.6 has soared to almost twenty (with a standard deviation of around eight). Many programmers might find this suspicious. They might question why most of the functions should find themselves in such long and potentially ripply tuples. Perhaps more projects should employ depth-gauges.

Summary. Into the pachinko machine of our program we pour the metal balls of our analytical thoughts, watch them bobble and bounce as they fall through webs of intricate dependency tuples and see them collect in the tray with feelings of either satisfaction ("Those functions are structured precisely as they ought to be") or bewilderment ("That function is connected to what now?").

Reference: How deep is your code? from our JCG partner Edmund Kirwan at the A blog about software blog.
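Since the definition is just the average of the tuple cardinalities, it is trivial to compute. The sketch below is not from the article; the input tuples are a guess consistent with figure 3's stated depth of 3.5.

import java.util.Arrays;
import java.util.List;

public class DepthGauge {

    // Depth of a program = average cardinality of its dependency tuples.
    static double averageDepth(List<List<String>> tuples) {
        int total = 0;
        for (List<String> tuple : tuples) {
            total += tuple.size();
        }
        return (double) total / tuples.size();
    }

    public static void main(String[] args) {
        // Hypothetical reconstruction of figure 3's four tuples.
        List<List<String>> figure3 = Arrays.asList(
            Arrays.asList("main()", "Wonder()", "a()"),
            Arrays.asList("main()", "Wonder()", "b()"),
            Arrays.asList("main()", "Wonder()", "c()", "d()"),
            Arrays.asList("main()", "Wonder()", "c()", "e()"));
        System.out.println(averageDepth(figure3)); // prints 3.5
    }
}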

JavaEE 7 with GlassFish on Eclipse Juno

Java EE 7 is hot. The first four JSRs passed the final approval ballot recently and GlassFish 4 reached promoted build 83 in the meantime. If you are following my blog, you know that I do most of my work with NetBeans. But I do recognize that there are other IDE users out there who also have a valid right to test-drive the latest and greatest in enterprise Java.

The GlassFish Eclipse Plugins

The starting place for Eclipse is the GlassFish Eclipse plugins. They moved into the Oracle Enterprise Pack for Eclipse (OEPE) project a while back and are still there to be installed and configured separately. The easiest way to get them is to use the pre-packaged OEPE bundle. Simply download the suitable version and get started. If you already have your favorite Java EE Eclipse version, you can also use the java.net update site for Eclipse Juno. The OEPE package contains official releases (more stable, tested) of the GlassFish plugins and new releases come one or two times per year. The update sites on java.net contain developer builds that are released as needed, typically a lot more often than OEPE. You can download whatever meets your needs.

Install the Plugin

This works as expected. If you stick to the update site, you simply go to Preferences -> Install/Update -> Available Software Sites and make sure that the above mentioned site is defined and checked. Install the GlassFish Tools and the Java EE 6 and/or Java EE 7 documentation and sources according to your needs. Click next two times, read through the license and check accept. Click Finish to install. The download gets everything in place and you have to finish the installation with a restart.

Starting a new Java EE 7 Project

Once that is done, you can start with configuring your GlassFish 4.0 domain. The simplest way is to create a New Project > Other > Web > New Dynamic Web Project and select the "New Runtime" button next to target runtime. The New Server Runtime Environment dialogue pops up and you can select "GlassFish 4.0" from the GlassFish folder. Make sure to select a Java SE 7 JDK and the appropriate GlassFish Server Directory to use (or even install). In this example I am using the latest promoted build 83, freshly downloaded from the GlassFish website. Click Finish. Now add a simple servlet which does nothing spectacular but uses the Java API for JSON Processing to write a simple JSON string:

protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    response.setContentType("application/json");
    PrintWriter out = response.getWriter();

    JsonObjectBuilder builder = Json.createObjectBuilder();
    builder.add("person", Json.createObjectBuilder()
            .add("firstName", "Markus")
            .add("lastName", "Eisele"));
    JsonObject result = builder.build();

    StringWriter sw = new StringWriter();
    try (JsonWriter writer = Json.createWriter(sw)) {
        writer.writeObject(result);
    }
    out.print(sw.toString());
}

Right-click the project and select "Run as ..." > "Run on Server" > GlassFish 4.0. Now point your browser to localhost and see the result working. The server view gives you the well-known overview of your instance. And there you go. Have fun doing your Java EE 7 development with Eclipse.

Reference: JavaEE 7 with GlassFish on Eclipse Juno from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.
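As a small follow-up (not from the original post), the same javax.json API can parse the servlet's output back into objects, for example in a client or a test. A minimal sketch:

import java.io.StringReader;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;

public class JsonReadBack {
    public static void main(String[] args) {
        String payload = "{\"person\":{\"firstName\":\"Markus\",\"lastName\":\"Eisele\"}}";
        try (JsonReader reader = Json.createReader(new StringReader(payload))) {
            // Navigate the parsed structure produced by the servlet above.
            JsonObject person = reader.readObject().getJsonObject("person");
            System.out.println(person.getString("firstName") + " " + person.getString("lastName"));
        }
    }
}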

WatchService combined with Akka actors

WatchService is a handy class that can notify you about any file system changes (create/update/delete of a file) in a given set of directories. It is described nicely in the official documentation, so I won't write another introduction tutorial. Instead we will try to combine it with Akka to provide a fully asynchronous, non-blocking file system change notification mechanism. And we will scale it both to multiple directories and multiple… servers! Just for starters, here is a simple, self-descriptive example:

val watchService = FileSystems.getDefault.newWatchService()
Paths.get("/foo/bar").register(watchService, ENTRY_CREATE, ENTRY_DELETE)

while(true) {
  val key = watchService.take()
  key.pollEvents() foreach { event =>
    event.kind() match {
      case ENTRY_CREATE => //...
      case ENTRY_DELETE => //...
      case x => logger.warn(s"Unknown event $x")
    }
  }
  key.reset()
}

I know java.nio stands for "New I/O" and not for "Non-blocking I/O", but one might expect such a class to work asynchronously. Instead we have to sacrifice one thread, use an awkward while(true) loop and block on watchService.take(). Maybe that's how the underlying operating system works (luckily WatchService uses the native OS API when available)? Doesn't matter, we have to live with that. Fortunately one WatchService can monitor an arbitrary number of paths, thus we need only one thread for the whole application, not one per directory. So, let's wrap it up in a Runnable:

class WatchServiceTask(notifyActor: ActorRef) extends Runnable with Logging {
  private val watchService = FileSystems.getDefault.newWatchService()

  def run() {
    try {
      while (!Thread.currentThread().isInterrupted) {
        val key = watchService.take()
        //coming soon...
        key.reset()
      }
    } catch {
      case e: InterruptedException =>
        logger.info("Interrupting, bye!")
    } finally {
      watchService.close()
    }
  }
}

This is the skeletal implementation I want you to follow for any Runnable that waits/blocks. Check Thread.isInterrupted() and escape the main loop when InterruptedException occurs. This way you can later safely shut down your thread by calling Thread.interrupt() without any delay. Two things to notice: we require the notifyActor reference in the constructor (it will be needed later, I hope you know why) and we don't monitor any directories yet. Luckily we can add monitored directories at any time (but we can never remove them afterwards, API limitation?!) There is one issue, however: WatchService only monitors a given directory, but not its subdirectories (it is not recursive). Fortunately another new kid on the JDK block, Files.walkFileTree(), releases us from writing a tedious recursive algorithm:

def watchRecursively(root: Path) {
  watch(root)
  Files.walkFileTree(root, new SimpleFileVisitor[Path] {
    override def preVisitDirectory(dir: Path, attrs: BasicFileAttributes) = {
      watch(dir)
      FileVisitResult.CONTINUE
    }
  })
}

private def watch(path: Path) =
  path.register(watchService, ENTRY_CREATE, ENTRY_DELETE)

See how nicely we can traverse the whole directory tree using a flat FileVisitor? Now the last piece of the puzzle is the body of the loop above (you will find the full source code on GitHub):

key.pollEvents() foreach { event =>
  val relativePath = event.context().asInstanceOf[Path]
  val path = key.watchable().asInstanceOf[Path].resolve(relativePath)
  event.kind() match {
    case ENTRY_CREATE =>
      if (path.toFile.isDirectory) {
        watchRecursively(path)
      }
      notifyActor ! Created(path.toFile)
    case ENTRY_DELETE =>
      notifyActor ! Deleted(path.toFile)
    case x => logger.warn(s"Unknown event $x")
  }
}

When a new file system entry is created and it happens to be a directory, we start monitoring that directory as well. This way if, for example, we start monitoring /tmp, every single subdirectory is monitored as well, both the ones existing at startup and newly created ones. The message classes are pretty straightforward. You might argue that separate CreatedFile and CreatedDirectory classes might have been a better idea; it depends on your use case, this was simpler from this article's perspective:

sealed trait FileSystemChange
case class Created(fileOrDir: File) extends FileSystemChange
case class Deleted(fileOrDir: File) extends FileSystemChange

case class MonitorDir(path: Path)

The last MonitorDir message will be used in just a second. Let's wrap our Runnable task and encapsulate it inside an actor. I know how bad it looks to start a thread inside an Akka actor, but the Java API forces us to do so and it will be our secret that never escapes that particular actor:

class FileSystemActor extends Actor {
  val log = Logging(context.system, this)

  val watchServiceTask = new WatchServiceTask(self)
  val watchThread = new Thread(watchServiceTask, "WatchService")

  override def preStart() {
    watchThread.setDaemon(true)
    watchThread.start()
  }

  override def postStop() {
    watchThread.interrupt()
  }

  def receive = LoggingReceive {
    case MonitorDir(path) =>
      watchServiceTask watchRecursively path
    case Created(file) =>
      //e.g. forward or broadcast to other actors
    case Deleted(fileOrDir) =>
  }
}

A few things to keep in mind: the actor takes full responsibility for the "WatchService" thread's lifecycle. Also see how it handles the MonitorDir message. However we don't monitor any directory from the beginning. This is done outside:

val system = ActorSystem("WatchFsSystem")
val fsActor = system.actorOf(Props[FileSystemActor], "fileSystem")
fsActor ! MonitorDir(Paths get "/home/john")
//...
system.shutdown()

Obviously you can send any number of MonitorDir messages with different directories and all of them are monitored simultaneously – but you don't have to monitor subdirectories, this is done for you. Creating and deleting a new file to smoke test our solution shows that apparently it works:

received handled message MonitorDir(/home/john/tmp)
received handled message Created(/home/john/tmp/test.txt)
received handled message Deleted(/home/john/tmp/test.txt)

There is one interesting piece of functionality we get for free. If we run this application in a cluster and configure one actor to only be created on one of the instances (see: Remote actors – discovering Akka for a thorough example of how to configure remote actors), we can easily aggregate file system changes from multiple servers! Simply look up the remote ("singleton" across the cluster) aggregate actor in FileSystemActor and forward events to it. The aforementioned article explains a very similar architecture so I won't go into too much detail. Suffice it to say, with this topology one can easily monitor multiple servers and collect change events from all of them. So… we have a cool solution, let's look for a problem. In a single-node setup FileSystemActor provides a nice abstraction over the blocking WatchService. Other actors interested in file system changes can register with FileSystemActor and respond quickly to changes. In a multi-node, cluster setup it works pretty much the same, but now we can easily control several nodes. One idea would be to replicate files over nodes.

Reference: WatchService combined with Akka actors from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.
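For readers who want the same recursive-registration trick without Akka, here is a minimal plain-Java sketch (not from the original post) using only java.nio.file; it prints events to stdout instead of sending actor messages.

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import static java.nio.file.StandardWatchEventKinds.*;

public class RecursiveWatch {
    public static void main(String[] args) throws Exception {
        final WatchService watchService = FileSystems.getDefault().newWatchService();
        Path root = Paths.get(args.length > 0 ? args[0] : ".");

        // Register the root and every existing subdirectory.
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs)
                    throws IOException {
                dir.register(watchService, ENTRY_CREATE, ENTRY_DELETE);
                return FileVisitResult.CONTINUE;
            }
        });

        while (!Thread.currentThread().isInterrupted()) {
            WatchKey key = watchService.take();           // blocks, like in the Scala version
            for (WatchEvent<?> event : key.pollEvents()) {
                Path dir = (Path) key.watchable();
                Path path = dir.resolve((Path) event.context());
                if (event.kind() == ENTRY_CREATE && Files.isDirectory(path)) {
                    path.register(watchService, ENTRY_CREATE, ENTRY_DELETE); // watch new subdirectory
                }
                System.out.println(event.kind() + ": " + path);
            }
            key.reset();
        }
    }
}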