What's New Here?


Please, Run That Calculation in Your RDBMS

There's one thing that you can do terribly wrong when working with an RDBMS, and that is not running your calculations in the database when you should. We're not advocating blindly moving all business logic into the database, but when I see a Stack Overflow question like this, I feel the urge to gently remind you of the second item in our popular 10 Common Mistakes Java Developers Make When Writing SQL.

The Stack Overflow question essentially boils down to this (liberally quoted): From the following medium-sized table, I wish to count the number of documents with status 0 or 1 per application ID:

AppID | DocID | DocStatus
------+-------+----------
1     | 100   | 0
1     | 101   | 1
2     | 200   | 0
2     | 300   | 1
...   | ...   | ...

Should I use Hibernate for that?

And the answer: NO! Don't use Hibernate for that (unless you mean native querying). You should use SQL for that. Es-Queue-El! You have so many trivial options to make your SQL Server help you run this query in a fraction of the time it would take if you loaded all that data into Java memory before aggregating. For instance (using SQL Server):

Using GROUP BY

This is the most trivial one, but it might not return results in exactly the way you want, i.e. different aggregation results end up in different rows:

SELECT [AppID], [DocStatus], count(*)
FROM [MyTable]
GROUP BY [AppID], [DocStatus]

Example on SQLFiddle, returning something like:

| APPID | DOCSTATUS | COLUMN_2 |
|-------|-----------|----------|
|     1 |         0 |        2 |
|     2 |         0 |        3 |
|     1 |         1 |        3 |
|     2 |         1 |        2 |

Using nested selects

This is probably the solution that this particular user was looking for. They probably want each aggregation in a separate column, and one very generic way to achieve this is by using nested selects. Note that this solution might prove to be a bit slow in databases that have a hard time optimising these things:

SELECT [AppID],
  (SELECT count(*) FROM [MyTable] [t2]
   WHERE [t1].[AppID] = [t2].[AppID]
   AND [DocStatus] = 0) [Status_0],
  (SELECT count(*) FROM [MyTable] [t2]
   WHERE [t1].[AppID] = [t2].[AppID]
   AND [DocStatus] = 1) [Status_1]
FROM [MyTable] [t1]
GROUP BY [AppID]

Example on SQLFiddle, returning something like:

| APPID | STATUS_0 | STATUS_1 |
|-------|----------|----------|
|     1 |        2 |        3 |
|     2 |        3 |        2 |

Using SUM()

This solution is probably the optimal one. It is equivalent to the previous one with nested selects, although it only works for simple queries, whereas the nested selects version is more versatile:

SELECT [AppID],
  SUM(IIF([DocStatus] = 0, 1, 0)) [Status_0],
  SUM(IIF([DocStatus] = 1, 1, 0)) [Status_1]
FROM [MyTable] [t1]
GROUP BY [AppID]

Example on SQLFiddle, same result as before.

Using PIVOT

This solution is for the SQL aficionados among you. It uses the T-SQL PIVOT clause!

SELECT [AppID], [0], [1]
FROM (
  SELECT [AppID], [DocStatus]
  FROM [MyTable]
) [t]
PIVOT (
  count([DocStatus])
  FOR [DocStatus] IN ([0], [1])
) [pvt]

Example on SQLFiddle, same result as before.

Conclusion

You may freely choose your weapon among the above suggestions, and I'm sure there are more alternatives. All of them will outperform any Java-based aggregation implementation by orders of magnitude, even for trivially small data sets. We'll say this time and again, and we'll quote Gavin King time and again on the same thing:

Just because you're using Hibernate, doesn't mean you have to use it for everything. A point I've been making for about ten years now.

And in our words: Use SQL whenever appropriate! And that is much more often than you might think!
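To make the contrast with in-memory aggregation concrete, here is a minimal JDBC sketch running the GROUP BY variant from Java; the connection URL and credentials are placeholders, and any of the queries above could be dropped in:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CountByStatus {

    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials: adjust for your environment
        try (Connection con = DriverManager.getConnection(
                 "jdbc:sqlserver://localhost;databaseName=mydb", "user", "pass");
             Statement stmt = con.createStatement();
             // The database does the aggregation; only a handful of
             // pre-aggregated rows ever travel to Java
             ResultSet rs = stmt.executeQuery(
                 "SELECT [AppID], [DocStatus], count(*) AS [cnt] "
               + "FROM [MyTable] GROUP BY [AppID], [DocStatus]")) {
            while (rs.next()) {
                System.out.printf("AppID=%d status=%d count=%d%n",
                    rs.getInt("AppID"), rs.getInt("DocStatus"), rs.getInt("cnt"));
            }
        }
    }
}

The point is what the code does not do: it never pulls the document rows across the wire, so the work stays where the data lives.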
Reference: Please, Run That Calculation in Your RDBMS from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

Tracking Exceptions With Spring – Part 2 – Delegate Pattern

In my last blog, I started to talk about the need to figure out whether or not your application is misbehaving in its production environment. I said that one method of monitoring your application is to check its log files for exceptions and take appropriate action if one is found. Obviously, log files can take up hundreds of megabytes of disk space, and it's impractical and really boring to monitor them by hand. I also said that there were several ways of automatically monitoring log files, and proposed a Spring-based utility that combs log files daily and sends you an email if/when it finds any exceptions.

I only got as far as describing the first class: the FileLocator, which searches a directory and its sub-directories for log files. When it finds one, it passes it to the FileValidator. The FileValidator has to perform several checks on the file. Firstly, it has to determine whether the file is young enough to examine for exceptions. The idea is that, as the application runs periodically, there's no point in checking all the files found in the directory for errors; we only want those files that have been created or updated since the application last ran.

The idea behind this design is to combine several implementations of the same interface, creating an aggregate object that's responsible for validating files. The eagle-eyed reader will notice that this is an implementation of the Delegate Pattern: instances of RegexValidator and FileAgeValidator are injected into the FileValidator, which delegates its validation tasks to these classes. Taking each of these in turn, and dealing with the Validator interface first...

public interface Validator {

  /** The validation method */
  public <T> boolean validate(T obj);
}

The code above demonstrates the simplicity of the Validator interface. It has a single method, validate(T obj), which is a generic method, increasing the flexibility and re-usability of the interface. When classes implement this interface, they can change the input argument type to suit their own purposes, as demonstrated by the first implementation below:

public class RegexValidator implements Validator {

  private static final Logger logger = LoggerFactory.getLogger(RegexValidator.class);

  private final Pattern pattern;

  public RegexValidator(String regex) {
    pattern = Pattern.compile(regex);
    logger.info("loaded regex: {}", regex);
  }

  @Override
  public <T> boolean validate(T string) {

    boolean retVal = false;
    Matcher matcher = pattern.matcher((String) string);
    retVal = matcher.matches();
    if (retVal && logger.isDebugEnabled()) {
      logger.debug("Found error line: {}", string);
    }
    return retVal;
  }
}

The RegexValidator class has a single-argument constructor that takes a regular expression string. This is converted to a Pattern instance variable and is used by the validate(T string) method to test whether or not the String input argument matches the regular expression passed to the constructor. If it does, then validate(T string) returns true.
@Service
public class FileAgeValidator implements Validator {

  @Value("${max.days}")
  private int maxDays;

  /**
   * Validate the age of the file.
   *
   * @see com.captaindebug.errortrack.Validator#validate(java.lang.Object)
   */
  @Override
  public <T> boolean validate(T obj) {

    File file = (File) obj;
    Calendar fileDate = getFileDate(file);
    Calendar ageLimit = getFileAgeLimit();

    boolean retVal = false;
    if (fileDate.after(ageLimit)) {
      retVal = true;
    }
    return retVal;
  }

  private Calendar getFileAgeLimit() {
    Calendar cal = Calendar.getInstance();
    cal.add(Calendar.DAY_OF_MONTH, -1 * maxDays);
    return cal;
  }

  private Calendar getFileDate(File file) {
    long fileDate = file.lastModified();
    Calendar when = Calendar.getInstance();
    when.setTimeInMillis(fileDate);
    return when;
  }
}

The second Validator implementation is the FileAgeValidator shown above, and the first thing to note is that the whole thing is driven by the max.days property. This is injected into the FileAgeValidator's @Value-annotated maxDays instance variable, which determines the maximum age of the file in days. If the file is older than this value, then validate(T obj) returns false. In this implementation, the validate(T obj) 'obj' argument is cast to a File object, which is then used to convert the date of the file into a Calendar object. The next line of code converts the maxDays variable into a second Calendar object: ageLimit. The ageLimit is then compared with the fileDate object. If the fileDate is after the ageLimit, then validate(T obj) returns true.

The final class in the validator package is the FileValidator, which, as mentioned above, delegates a lot of its responsibility to three other aggregated validators: one FileAgeValidator and two RegexValidators.
@Service
public class FileValidator implements Validator {

  private static final Logger logger = LoggerFactory.getLogger(FileValidator.class);

  @Value("${following.lines}")
  private Integer extraLineCount;

  @Autowired
  @Qualifier("scan-for")
  private RegexValidator scanForValidator;

  @Autowired(required = false)
  @Qualifier("exclude")
  private RegexValidator excludeValidator;

  @Autowired
  private FileAgeValidator fileAgeValidator;

  @Autowired
  private Results results;

  @Override
  public <T> boolean validate(T obj) {

    boolean retVal = false;
    File file = (File) obj;
    if (fileAgeValidator.validate(file)) {
      results.addFile(file.getPath());
      checkFile(file);
      retVal = true;
    }
    return retVal;
  }

  private void checkFile(File file) {

    try {
      BufferedReader in = createBufferedReader(file);
      readLines(in, file);
      in.close();
    } catch (Exception e) {
      logger.error("Error whilst processing file: " + file.getPath()
          + " Message: " + e.getMessage(), e);
    }
  }

  @VisibleForTesting
  BufferedReader createBufferedReader(File file) throws FileNotFoundException {
    BufferedReader in = new BufferedReader(new FileReader(file));
    return in;
  }

  private void readLines(BufferedReader in, File file) throws IOException {

    int lineNumber = 0;
    String line;
    do {
      line = in.readLine();
      if (isNotNull(line)) {
        lineNumber = processLine(line, file.getPath(), ++lineNumber, in);
      }
    } while (isNotNull(line));
  }

  private boolean isNotNull(Object obj) {
    return obj != null;
  }

  private int processLine(String line, String filePath, int lineNumber,
      BufferedReader in) throws IOException {

    if (canValidateLine(line) && scanForValidator.validate(line)) {
      List<String> lines = new ArrayList<String>();
      lines.add(line);
      addExtraDetailLines(in, lines);
      results.addResult(filePath, lineNumber, lines);
      lineNumber += extraLineCount;
    }
    return lineNumber;
  }

  private boolean canValidateLine(String line) {
    boolean retVal = true;
    if (isNotNull(excludeValidator)) {
      retVal = !excludeValidator.validate(line);
    }
    return retVal;
  }

  private void addExtraDetailLines(BufferedReader in, List<String> lines)
      throws IOException {

    for (int i = 0; i < extraLineCount; i++) {
      String line = in.readLine();
      if (isNotNull(line)) {
        lines.add(line);
      } else {
        break;
      }
    }
  }
}

The FileValidator's validate(T obj) takes a File as an argument. Its first responsibility is to validate the age of the file. If that validator returns true, then it informs the Results class that it's found a new, valid file. It then checks the file for errors, adding any it finds to the Results instance. It does this by using a BufferedReader to check each line of the file in turn. Before checking whether a line contains an error, it checks that the line isn't excluded from the check, i.e. that it doesn't match the excluded exceptions, the ones we're not interested in. If the line doesn't match the excluded exceptions, then it's checked for exceptions that we do need to find, using the second instance of the RegexValidator (a sketch of how these two beans might be wired follows at the end of this piece). If the line does contain an error, it's added to a List<String> object, and a number of following lines are then read from the file and added to the list to make the report more readable. And so the file parsing continues, checking one line at a time, looking for errors and building up a report that can be processed later.

That covers validating files using the Delegate Pattern and adding any exceptions found to the report. But how does this Results object work? I've not mentioned it, and how is the output generated? More on that next time.

The code for this blog is available on Github at: https://github.com/roghughe/captaindebug/tree/master/error-track

Reference: Tracking Exceptions With Spring – Part 2 – Delegate Pattern from our JCG partner Roger Hughes at the Captain Debug's Blog blog.
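One thing the article doesn't show is how the two RegexValidator beans behind @Qualifier("scan-for") and @Qualifier("exclude") are declared. A minimal sketch, assuming Java-based Spring configuration; the regular expressions are illustrative placeholders, not the ones from the original project:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ValidatorConfiguration {

  /** Matches the lines we want to report, e.g. anything containing "ERROR". */
  @Bean(name = "scan-for")
  public RegexValidator scanForValidator() {
    return new RegexValidator("^.*ERROR.*$");
  }

  /** Matches lines to skip, i.e. known exceptions we're not interested in. */
  @Bean(name = "exclude")
  public RegexValidator excludeValidator() {
    return new RegexValidator("^.*IgnorableMonitoringException.*$");
  }
}

Because no explicit qualifier metadata is defined, Spring falls back to matching the @Qualifier values against the bean names, so the names above line up with the annotations in FileValidator.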

Java Object Interning

Java stores the string constants appearing in the source code in a pool. In other words, when you have code like:

String a = "I am a string";
String b = "I am a string";

the variables a and b will hold the same value. Not simply two strings that are equal, but the very same string. In Java terms, a == b will be true. However, this works only for Strings and for small Integer and Long values. Other objects are not interned, so if you create two objects that hold exactly the same values, they are usually not the same. They may, and probably will, be equal, but not the same object. This can be a nuisance sometimes, typically when you fetch objects from some persistence store. If you happen to fetch the same object more than once, you would probably like to get the same object instead of two copies. In other words, you want only one single copy in memory for a single object in the persistence store. Some persistence layers do this for you; for example, JPA implementations follow this pattern. In other cases you may need to perform the caching yourself. In this example I will describe a simple intern pool implementation that can also be viewed in the stackoverflow topic. In this article I also explain the details and the considerations that led to the solution depicted there (and here as well), so it contains a bit more tutorial information than the original discussion.

Object pool

Interning needs an object pool. When you have an object and you want to intern it, you essentially look in the object pool to see if there is already an object equal to the one at hand. In case there is one, we will use the one already there. If there is no object equal to the actual one, then we put the actual object into the pool and use it. There are two major issues we have to face during implementation:

- garbage collection
- the multi-thread environment

When an object is not needed anymore, it has to be removed from the pool. The removal could be done by the application, but that would be a totally outdated and old approach. One of the main advantages of Java over C++ is garbage collection; we can let the GC collect these objects. To do that, we should not have strong references to the pooled objects in the object pool.

Reference

If you know what soft, weak and phantom references are, just jump to the next section. You may have noticed that I did not simply say "references" but said "strong references". If you have learned that the GC collects objects when there are no references to them, that was not absolutely correct. The fact is that a strong reference is what is needed for the GC to treat an object as untouchable. To be even more precise, the strong reference should be reachable, travelling along other strong references, from local variables, static fields and similar ubiquitous locations. In other words: the (strong) references that point from one dead object to another do not count; they will be removed and collected together. So if these are strong references, then presumably there are not-so-strong references, you may think. You are right. There is a class named java.lang.ref.Reference and there are three classes that extend it:

- PhantomReference
- WeakReference
- SoftReference

all in the same package. If you read the documentation, you may suspect that what we need is the weak one. A phantom reference is out of the question for use in the pool, because phantom references cannot be used to get access to the object. A soft reference is overkill.
If there are no strong references to the object, then there is no point in keeping it in the pool. If it comes again from some source, we will intern it again. It will certainly be a different instance, but nobody will notice, since there is no reference to the previous one. Weak references are the ones that can be used to get access to the object but do not alter the behaviour of the GC.

WeakHashMap

A weak reference is not the class we have to use directly. There is a class named WeakHashMap that refers to its key objects using weak references, and this is actually what we need. When we intern an object and want to see if it is already in the pool, we search all the objects to see if there is any equal to the actual one. A map is just the thing that implements this search capability. Holding the keys in weak references will let the GC collect a key object when nobody needs it. We can search; so far, so good. Using a map, we also have to get some value. In this case we just want to get the same object, so we have to put the object into the map when it is not there. However, putting the object itself there would ruin what we gained by keeping only weak references to it as a key: the map would hold a strong reference to the object as its value. Instead, we have to create and put a weak reference to the object as the value.

WeakPool

After that explanation, here is the code. It simply says: if there is an object in the pool equal to the actual one, then get(actualObject) returns it; if there is none, get(actualObject) returns null. The method put(newObject) puts a new object into the pool and, if there was any equal to the new one, overwrites the old one's place with the new.

public class WeakPool<T> {

  private final WeakHashMap<T, WeakReference<T>> pool =
      new WeakHashMap<T, WeakReference<T>>();

  public T get(T object) {
    final T res;
    WeakReference<T> ref = pool.get(object);
    if (ref != null) {
      res = ref.get();
    } else {
      res = null;
    }
    return res;
  }

  public void put(T object) {
    pool.put(object, new WeakReference<T>(object));
  }
}

InternPool

The final solution to the problem is an intern pool, which is very easy to implement using the already available WeakPool. The InternPool has a WeakPool inside, and there is one single synchronized method in it: intern(T object).

public class InternPool<T> {

  private final WeakPool<T> pool = new WeakPool<T>();

  public synchronized T intern(T object) {
    T res = pool.get(object);
    if (res == null) {
      pool.put(object);
      res = object;
    }
    return res;
  }
}

The method tries to get the object from the pool and, if it is not there, puts it there and then returns it. If there is a matching object already in the pool, it returns the one already there.

Multi-thread

The method has to be synchronized to ensure that the checking and the insertion of the new object are atomic. Without the synchronization it could happen that two threads check two equal instances against the pool, both of them find that there is no matching object in it, and then both insert their version into the pool. One of them, the one putting its object in later, will be the winner, overwriting the already inserted object; but the loser will also think that it owns the genuine single object. Synchronization solves this problem.
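Before looking at how the pool races with the garbage collector, here is a minimal usage sketch. The Document class is a made-up example; interning only makes sense for classes with a proper equals()/hashCode(), here based on the id field:

// Illustrative value class; equality is based on the id field
public class Document {

  private final String id;

  public Document(String id) {
    this.id = id;
  }

  @Override
  public boolean equals(Object other) {
    return other instanceof Document && id.equals(((Document) other).id);
  }

  @Override
  public int hashCode() {
    return id.hashCode();
  }

  public static void main(String[] args) {
    InternPool<Document> pool = new InternPool<Document>();
    // Two equal but distinct instances, as if fetched twice from a persistence store
    Document first = pool.intern(new Document("doc-42"));
    Document second = pool.intern(new Document("doc-42"));
    // Both variables now point to the very same instance
    System.out.println(first == second); // prints: true
  }
}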
Racing with the Garbage Collector

Even though the different threads of the Java application using the pool cannot get into trouble using the pool at the same time, we should still check whether there is any interference with the garbage collector thread. It may happen that a weak reference comes back null when its get method is called. This happens when the key object has been reclaimed by the garbage collector but the weak hash map in the WeakPool implementation has not yet deleted the entry. Even if the weak map implementation checks the existence of the key whenever the map is queried, it can happen: the garbage collector can kick in between the call of get() on the weak hash map and the call of get() on the weak reference it returned. The hash map returned a reference to an object that existed at the time it returned but, since the reference is weak, it was reclaimed before the execution of our Java application got to the next statement. In this situation the WeakPool implementation returns null. No problem: InternPool does not suffer from this either. If you look at the other code in the aforementioned stackoverflow topic, you can see this:

public class InternPool<T> {

  private WeakHashMap<T, WeakReference<T>> pool =
      new WeakHashMap<T, WeakReference<T>>();

  public synchronized T intern(T object) {
    T res = null;
    // (The loop is needed to deal with race
    // conditions where the GC runs while we are
    // accessing the 'pool' map or the 'ref' object.)
    do {
      WeakReference<T> ref = pool.get(object);
      if (ref == null) {
        ref = new WeakReference<T>(object);
        pool.put(object, ref);
        res = object;
      } else {
        res = ref.get();
      }
    } while (res == null);
    return res;
  }
}

In this code the author created a seemingly infinite loop to handle this situation. Not too appealing, but it works. It is not likely that the loop will be executed an infinite number of times; likely not more than twice. Still, the construct is hard to understand and complicated. The moral: the single responsibility principle. Focus on simple things; decompose your application into simple components.

Conclusion

Even though Java does interning only for Strings and for some of the objects that primitive types are boxed to, it is possible and sometimes desirable to do interning yourself. In that case the interning is not automatic; the application has to perform it explicitly. The two simple classes listed here can be copy-pasted into your code base to do that, or you can import the library as a dependency from Maven central:

<dependency>
  <groupId>com.javax0</groupId>
  <artifactId>intern</artifactId>
  <version>1.0.0</version>
</dependency>

The library is minimal, containing only these two classes, and is available under the Apache license. The source code for the library is on GitHub.

Reference: Java Object Interning from our JCG partner Peter Verhas at the Java Deep blog.

3 Reasons to choose Vert.x

Modern web applications and the rise of mobile clients have redefined what is expected from a web server. Node.js was the first technology that recognized the paradigm shift and offered a solution. The application platform Vert.x takes some of the innovations from Node.js and makes them available on the JVM, combining fresh ideas with one of the most sophisticated and fastest runtime environments available. Vert.x comes with a set of exciting features that make it interesting for anybody developing web applications.

Non-blocking, event-driven runtime

Vert.x provides a non-blocking, event-driven runtime. If a server has to do a task that requires waiting for a response (e.g. requesting data from a database), there are two ways this can be implemented: blocking and non-blocking. The traditional approach is a synchronous, or blocking, call: the program flow pauses and waits for the answer to return. To be able to handle more than one request in parallel, the server executes each request in a different thread. The advantage is a relatively simple programming model, but the downside is a significant amount of overhead if the number of threads becomes large. The second solution is a non-blocking call: instead of waiting for the answer, the caller continues execution but provides a callback that will be executed once data arrives. This approach requires a (slightly) more complex programming model, but has a lot less overhead. In general, a non-blocking approach results in much better performance when a large number of requests need to be served in parallel.

Simple to use concurrency and scalability

Vert.x applications are written using an Actor-like concurrency model. An application consists of several components, the so-called Verticles, which run independently. A Verticle runs single-threaded and communicates with other Verticles by exchanging messages on the global event bus. Because they do not share state, Verticles can run in parallel. The result is an easy-to-use approach for writing multi-threaded applications: you can create several Verticles that are responsible for the same task, and the runtime will distribute the workload among them, which means you can take full advantage of all CPU cores without much effort. Verticles can also be distributed between several machines. This is transparent to the application code: the Verticles use the same mechanisms to communicate as if they were running on the same machine. This makes it extremely easy to scale your application. (A minimal Verticle sketch follows at the end of this piece.)

Polyglot

Unlike many other application platforms, Vert.x is polyglot. Applications can be written in several languages. It is even possible to use different languages in the same application. At this point Java, Python, Groovy, Ruby, and JavaScript can be used, and support for Scala and Clojure is on the way.

Conclusion

Vert.x is a relatively young platform, and subsequently the ecosystem is not as rich as that of the more established platforms. Nevertheless, for the most common tasks there are extensions available. The advantages of Vert.x are astonishing: its non-blocking, event-driven nature is extremely well-suited for modern web applications, and it makes it easy to write concurrent applications that scale effortlessly from a single low-end machine to a cluster with several high-end servers. Add the fact that you can use the most popular languages for the JVM, and you have a web developer's dream come true!

Reference: 3 Reasons to choose Vert.x from our JCG partner Michael Heinrichs at the Mike's Blog blog.
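As promised above, here is a minimal Verticle sketch, assuming the Vert.x 2.x Java API; the port and response text are illustrative:

import org.vertx.java.core.Handler;
import org.vertx.java.core.http.HttpServerRequest;
import org.vertx.java.platform.Verticle;

// A tiny HTTP server Verticle; it runs single-threaded,
// so the handler needs no synchronization
public class HelloVerticle extends Verticle {

  @Override
  public void start() {
    vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
      @Override
      public void handle(HttpServerRequest request) {
        request.response().end("Hello from Vert.x!");
      }
    }).listen(8080);
  }
}

To use more cores, you would deploy several instances of this Verticle and let the runtime distribute the incoming requests among them.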

Parameterized JUnit tests

Sometimes you encounter a problem that just screams for using "parameterized" tests rather than copy/pasting the same method many times. The test method is basically the same, and the only thing that changes is the data passed in. In this case, consider creating a test case that utilizes the Parameterized runner from JUnit.

I recently ran into a problem where our validation of an email address did not allow unicode characters. The fix was fairly straight-forward: change the regular expression to allow those characters. Next, it was time to test the change. Rather than copy/paste separate methods for each set of data, I decided to learn about the Parameterized approach. Below is the result. The data includes the expected result and the email address to be validated.

JUnit test class

package com.mycompany.client;

import static org.junit.Assert.*;

import java.util.Arrays;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import com.mycompany.test.TestServiceUtil;

/**
 * Parameterized test case for validating email addresses against a regular expression.
 * We need to allow unicode characters in the userid portion of the email address, so
 * these test cases were created to help validate the validateEmailAddress method
 * in the FieldValidationController class.
 *
 * @author mmiller
 */
@RunWith(Parameterized.class)
public class TestFieldValiationController {

    @Parameters(name = "{index}: {1} is valid email address = {0}")
    public static Iterable<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { true,  "john@mycomp.com" },
            { true,  "john123@mycomp.com" },
            { true,  "j+._%20_-brown@mycomp.com" },
            { true,  "123@mycomp.com" },
            { false, "john brown@mycomp.com" },
            { false, "123@mycomp" },
            { false, "john^brown@mycomp.com" },
            { true,  "1john@mycomp.com" },
            { false, "john#brown@mycomp.com" },
            { false, "john!brown@mycomp.com" },
            { false, "john()brown@mycomp.com" },
            { false, "john=brown@mycomp.com" },
            { true,  "johñ.brown@mycomp.com" },
            { false, "john.brown@mycomp.coñ" },
            { true,  "johú@mycomp.com" },
            { true,  "johíáó@mycomp.com" } });
    }

    private boolean expected;
    private String emailAddress;

    public TestFieldValiationController(boolean expected, String emailAddress) {
        this.expected = expected;
        this.emailAddress = emailAddress;
        TestServiceUtil.getInstance();
    }

    @Test
    public void validateEmail() {
        assertEquals(expected,
            FieldValidationController.getInstance().validateEmailAddress(emailAddress));
    }
}

Hope this helps!

Reference: Parameterized JUnit tests from our JCG partner Mike Miller at the Scratching my programming itch blog.

Spring Data Couchbase 1.0 GA Released

Spring Data Couchbase 1.0 GA release is here! The project is part of the Spring Data umbrella, which aims to provide a familiar and consistent Spring-based programming model for new datastores while retaining store-specific features and capabilities. The Spring Data Couchbase project provides integration with the Couchbase Server database. A number of changes and improvements led to the GA release: it now includes new features and additions, the most notable of which are support for custom converters, JSR-303 validation support, and built-in support for temporal objects like Dates, Calendars and similar JodaTime variants. Maven Central has been updated with the new release artifacts. There are also plans for integrating Spring Data Couchbase with related projects like Spring Boot and Spring XD.

How to start multiple WebLogic managed servers

The WebLogic Server docs recommend that you create a dedicated admin server and separate managed servers for application deployment. Here I will show you how to create one or more managed servers on the same host as the admin server. I assume you already have WLS installed, with your own domain created and running. If you haven't done this before, you may refer to my previous blog on how to create and start WLS. After you have started your domain (that's the default admin server), follow these steps:

1. Log into your http://localhost:7001/console webapp.
2. On the right menu tree, click Environment > Servers. You should see "myserver(admin)" already listed.
3. Click the "New" button, enter Server Name: appserver, and set Server Listen Port: 7002.
4. Click "Next" and then "Finish".
5. Now open a new terminal and run these commands:

cd mydomain
bin/startManagedWebLogic.sh appserver localhost:7001

You will need to enter the username and password, the same as for your /console webapp. After this managed server is up and running, you may send the process to the background by pressing CTRL+Z and then running the bg command. Or you may use the servers/appserver/security/boot.properties file to bypass the user/password prompt on every restart of this managed server (see the sketch below).

Now you have one managed server started along with your admin server. After this you may start deploying applications onto it. Every webapp deployed is now accessible via the server's assigned port, such as the http://localhost:7002/yourwebapp URL. You may repeat the same steps for however many managed servers you would like to run on the same host; just make sure each server name and listen port is unique.

Reference: How to start multiple WebLogic managed servers from our JCG partner Zemian Deng at the A Programmer's Journal blog.
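For the boot.properties shortcut mentioned above, a minimal sketch of the servers/appserver/security/boot.properties file looks like this; the credentials are placeholders, and WebLogic replaces them with encrypted values on the next server start:

username=weblogic
password=your_admin_password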

Atom, Hackable Text Editor By GitHub

GitHub has recently announced its new text editor: Atom. The claim is that it is a 21st-century text editor, highly customizable so you can make it do anything you want. It also allows developers to hack on the core of the editor, just like vim and emacs.

Built with web technology

Atom is similar to the Brackets text editor: both use web technologies to build a desktop application, which means it can easily be ported to a different platform with a different wrapper. To make this happen, it is built on Node.js, where over 50 thousand packages are waiting for you to hack on, and you can even start a web server within your editor.

Features highlight

Atom itself is kept clean, and you can import your previous editor settings into Atom:

- File system browser
- Fuzzy finder for quickly opening files
- Fast project-wide search and replace
- Multiple cursors and selections
- Multiple panes
- Snippets
- Code folding
- A clean preferences UI
- Import TextMate grammars and themes

Fully customizable key bindings, themes and plugins

As mentioned above, you are able to hack on the core, themes and packages (plugins):

- Customizing Atom
- Creating a Theme
- Creating a Package

The pricing

You may be wondering whether such a good editor will be charged for. The answer is yes. The president of GitHub, Tom Preston-Werner, has confirmed that:

Atom won't be closed source, but it won't be open source either. It will be somewhere in between, making it easy for us to charge for Atom while still making the source available under a restrictive license so you can see how everything works. We haven't finalized exactly how this will work yet. We will have full details ready for the official launch.

Check it out

If you want to try such an awesome developer and hacker editor, you can apply for a beta version on Atom.io.

Reference: Atom, Hackable Text Editor By GitHub from our JCG partner Chow Chi Sum at the Coders Grid blog.

Scala: OOP basics

Most of the time when I'm coding, I use Java, so it's my main programming language. It satisfies me in all aspects, but I've noticed that Java is sometimes too verbose. That's why I have switched to Scala, and I now spend all my free time learning it. I'm going to publish some notes on my blog, so you are welcome to visit the new page of my blog dedicated to Scala. This first serious article is about Scala OOP.

Class declaration

OOP in Scala allows us to create classes, but the declaration process differs compared with Java. Here is how we can do it in Scala:

class Person

Nothing hard. And that's not strange: we just declared an absolutely useless class. Let's move further and look at how we can upgrade the code in order to make it more practical.

Primary constructor

I'm going to add some parameters to the Person class which need to be specified during object creation. As many of you may guess, I'm talking about the primary constructor:

class Person(name: String, age: Int)

A primary constructor can be called directly when we are creating an object. Here is an example:

val p1 = new Person("Alex", 24)

It can also be called indirectly from an overloaded (auxiliary) constructor. Overloading is out of scope here; I'll discuss it in the following posts.

Prefixed constructor parameters

Ok, we have already declared the Person class, and it looks like it can be more or less useful, since it has some initialization parameters passed in the primary constructor. But how can we interact with these parameters after the initialization?

scala> p1.name
<console>:10: error: value name is not a member of Person
       p1.name

As you can see, the attempt to access the name parameter of the Person class instance causes an error. That's because the name parameter doesn't have any prefix in the primary constructor declaration. Hence the name and age parameters are private and not accessible outside the Person class.

class Person(val name: String, var age: Int)

I modified the Person class declaration. Now the name parameter will be supplied with a getter, because it is prefixed with val, and the age parameter will be supplied with a setter in addition to a getter, because it is prefixed with var.

scala> val p2 = new Person("Bobby", 25)
p2: Person = Person@30374534

scala> p2.name
res1: String = Bobby

scala> p2.name = "Bob"
<console>:9: error: reassignment to val
       p2.name = "Bob"

scala> p2.age
res2: Int = 25

scala> p2.age = 26
p2.age: Int = 26

With the help of the val and var prefixes inside a primary constructor, you are able to make classes more practical.

Class body

Let's move further with OOP basics in Scala. Now I want to show how Scala classes can be more practical when they contain some additional functionality inside the class body.

class Person(val name: String, var age: Int) {
  def introduce() = println(s"Hi, my name is ${name}, I'm ${age} years old")
}

Now an instance of the Person class is more social, and he (she) may introduce himself to someone. Try the code:

val p3 = new Person("Jhon", 33)
p3.introduce

Summary

Well, in this post I tried to give an overview of the simplest and most essential basics of Scala OOP. In my opinion, Scala OOP is more complex than Java OOP. Maybe not even more complex, but it is different, and it definitely requires some time to get used to.

Reference: Scala: OOP basics from our JCG partner Alexey Zvolinskiy at the Fruzenshtein's notes blog.

Using Lucene’s search server to search Jira issues

You may remember my first blog post describing how the Lucene developers eat our own dog food by using a Lucene search application to find our Jira issues. That application has become a powerful showcase of a number of modern Lucene features, such as drill sideways and dynamic range faceting, a new suggester based on infix matches, the postings highlighter, block-join queries so you can jump to a specific issue comment that matched your search, near-real-time indexing and searching, etc. Whenever new users ask me about Lucene's capabilities, I point them to this application so they can see for themselves. Recently, I've made some further progress, so I want to give an update.

The source code for the simple Netty-based Lucene server is now available on this subversion branch (see LUCENE-5376 for details). I've been gradually adding coverage for additional Lucene modules, including facets, suggesters, analysis, queryparsers, highlighting, grouping, joins and expressions. And of course normal indexing and searching! Much remains to be done (there are plenty of nocommits), and the goal here is not to build a feature-rich search server but rather to demonstrate how to use Lucene's current modules in a server context with minimal "thin server" additional source code.

Separately, to test this new Lucene-based server, and to complete the "dog food," I built a simple Jira search application plugin, to help us find Jira issues, here. This application has various Python tools to extract and index Jira issues using Jira's REST API, and a user-interface layer running as a Python WSGI app to send requests to the server and render responses back to the user. The goal of this Jira search application is to make it simple to point it at any Jira instance / project and enable full searching over all issues. I just pushed some further changes to the production site:

- I upgraded the Jira search application to the current server branch (previously it was running on my private fork).
- I switched all analysis components to Lucene's analysis factories; these factories use Java's SPI (Service Provider Interface) so that the server has access to any char filters, tokenizers and token filters in the classpath. This is very helpful when building a server, because it means you don't need any special code to handle the great many analysis components that Lucene provides these days. Everything simply passes through the factories (which know how to parse their own arguments).
- I've added the Tika project, so you can now find Tika issues as well. This was very simple to add, and seems to be working!
- I inserted WordDelimiterFilter so that CamelCaseTokens are split. For example, try searching on infix and note the highlights. As Robert Muir reminded me, WordDelimiterFilter corrupts offsets, which will mess up highlighting in some cases, so I'm going to try to set up ICUTokenizer, which I'm already using, to do this splitting instead.
- I switched to Lucene's new expressions module to do a blended relevance + recency sort by default when you do a text search, which is helpful because most of the time we are looking for recently touched issues. Previously I used a custom FieldComparator to achieve the same functionality, but expressions are more compact and powerful and let me remove that custom FieldComparator. (A small sketch of this approach follows at the end of this piece.)
- I switched to near-real-time building of the suggestions, using AnalyzingInfixSuggester. Previously I was fully rebuilding the suggester every five minutes, so this saves a lot of CPU, since now I just add new Jira issues as they come and refresh the suggester. It also means a much shorter delay from when an issue is added to when it can be suggested. See LUCENE-5477 for details.
- I now commit once per day. Previously I never committed and simply relied on near-real-time searching. This works just fine, except that when I need to bring the server down (e.g. to push new changes out), it required full reindexing, which was very fast but a poor user experience for those users who happened to do a search while it was happening. Now, when I bounce the server, it comes back to the last commit and the near-real-time indexing quickly catches up on any issues changed since that last commit.
- Various small issues, such as proper handling when a Jira issue is renamed (the Jira REST API does not make it so easy to discover this!), better production push automation, and an upgrade to a newer version of the bootstrap UI library.

There are still plenty of improvements to make to this Jira search application. For fields with many possible drill-down values, I'd like to have a simple suggester so the user can quickly drill down. I'd like to fix the suggester to filter suggestions according to the project. For example, if you've drilled down into Tika issues, then when you type a new search you should see only Tika issues suggested. For that we need to make AnalyzingInfixSuggester context aware. I'd also like a more compact UI for all of the facet fields; maybe I need to hide the less commonly used facet fields under a "More"…

Reference: Using Lucene's search server to search Jira issues from our JCG partner Michael Mc Candless at the Changing Bits blog.
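As a footnote: the post doesn't show what the expressions-based sort looks like, but a minimal sketch might be the following. The blending formula and the recency field (a numeric doc-values field you would index yourself) are illustrative assumptions, not the author's actual code:

import org.apache.lucene.expressions.Expression;
import org.apache.lucene.expressions.SimpleBindings;
import org.apache.lucene.expressions.js.JavascriptCompiler;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

public class BlendedSort {

    public static Sort blendedRelevanceRecency() throws java.text.ParseException {
        // Blend the relevance score with a recency value; the weight is made up
        Expression expr = JavascriptCompiler.compile("_score + 0.1 * recency");
        SimpleBindings bindings = new SimpleBindings();
        // "_score" binds each hit's relevance score into the expression
        bindings.add(new SortField("_score", SortField.Type.SCORE));
        // "recency" binds an assumed numeric doc-values field
        bindings.add(new SortField("recency", SortField.Type.LONG));
        // true = sort descending by the blended value
        return new Sort(expr.getSortField(bindings, true));
    }
}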