Beyond Thread Pools: Java Concurrency is Not as Bad as You Think

Apache Hadoop, Apache Spark, Akka, Java 8 streams and Quasar: from the classic use cases to the newest concurrency approaches for Java developers.

There's a lot of chatter going around about newer concepts in concurrency, yet many developers haven't had a chance to wrap their heads around them. In this post we'll go through the things you need to know about Java 8 streams, Hadoop, Apache Spark, Quasar fibers and the Reactive programming approach, and help you stay in the loop, especially if you're not working with them on a regular basis. This is not the future; it is happening right now.

What are we dealing with here?

When talking about concurrency, a good way to characterize the issue at hand is to answer a few questions to get a better feel for it: Is it a data processing task? If so, can it be broken down into independent pieces of work? What's the relationship between the OS, the JVM and your code (native threads vs. lightweight threads)? How many machines and processors are involved (single core vs. multicore)? Let's go through each of these and figure out the best use cases for each approach.

1. From Thread Pools to Parallel Streams

Data processing on single machines, letting Java take care of thread handling

With Java 8, we were introduced to the new Stream API that allows applying aggregate operations like filter, sort or map on streams of data. Streams also allow parallel operations on multicore machines when applying .parallelStream(), splitting the work between threads using the Fork/Join framework introduced in Java 7. This is an evolution from the Java 6 java.util.concurrent library, where we met the ExecutorService, which creates and handles our worker thread pools. Fork/Join is also built on top of the ExecutorService; the main difference from a traditional thread pool is how the work is distributed between threads, and thereby the support for multicore machines.
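To make this concrete, here is a minimal sketch contrasting an explicit Fork/Join RecursiveTask with the equivalent parallel stream one-liner, which uses the same framework under the hood. The numbers and the threshold are arbitrary values chosen for the sketch, not recommendations:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
import java.util.stream.LongStream;

// Sums 1..10000 twice: once with an explicit Fork/Join RecursiveTask,
// once with a parallel stream that uses the common Fork/Join pool internally.
public class ParallelSumDemo {

    static class SumTask extends RecursiveTask<Long> {
        private static final int THRESHOLD = 1_000; // arbitrary cut-off for this sketch
        private final long[] numbers;
        private final int start, end;

        SumTask(long[] numbers, int start, int end) {
            this.numbers = numbers;
            this.start = start;
            this.end = end;
        }

        @Override
        protected Long compute() {
            if (end - start <= THRESHOLD) {       // small enough: just loop
                long sum = 0;
                for (int i = start; i < end; i++) {
                    sum += numbers[i];
                }
                return sum;
            }
            int mid = (start + end) / 2;           // otherwise fork into two halves
            SumTask left = new SumTask(numbers, start, mid);
            SumTask right = new SumTask(numbers, mid, end);
            left.fork();                           // queued, possibly stolen by an idle worker
            return right.compute() + left.join();  // join the partial results
        }
    }

    public static void main(String[] args) {
        long[] numbers = new long[10_000];
        for (int i = 0; i < numbers.length; i++) {
            numbers[i] = i + 1;
        }

        long forkJoinSum = new ForkJoinPool().invoke(new SumTask(numbers, 0, numbers.length));
        long streamSum = LongStream.rangeClosed(1, 10_000).parallel().sum();

        System.out.println(forkJoinSum); // 50005000
        System.out.println(streamSum);   // 50005000
    }
}
```

Both calls produce the same result; the stream version simply hides the forking, joining and work-stealing behind the pipeline.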
With a simple ExecutorService you're in full control of the workload distribution between worker threads, determining the size of each task for the threads to handle. With Fork/Join, on the other hand, there's a work-stealing algorithm in place that abstracts workload handling between threads. In a nutshell, this allows large tasks to be divided into smaller ones (forked) and processed in different threads, eventually joining the results, balancing the work between threads. However, it's not a silver bullet. Sometimes Parallel Streams may even slow you down, so you'll need to think it through. Adding .parallelStream() to your methods can cause bottlenecks and slowdowns (some 15% slower on this benchmark we ran); the fine line goes through the number of threads. Let's say we're already running multiple threads and we're using .parallelStream() in some of them, adding more and more threads to the pool. This could easily turn into more than our cores can handle, and slow everything down due to increased context switching.

Bottom line: Parallel Streams abstract thread handling on a single machine in a way that distributes the workload between your cores. However, if you want to use them efficiently, it's critical to keep the hardware in mind and not spawn more threads than your machine can handle.

2. Apache Hadoop and Apache Spark

Heavy duty lifting: big data processing across multiple machines

Moving on to multiple machines, petabytes of data, and tasks like pulling all tweets that mention Java from Twitter, or heavy duty machine learning algorithms. When speaking of Hadoop, it's important to take another step and think of the wider framework and its components: the Hadoop Distributed File System (HDFS), a resource management platform (YARN), the data processing module (MapReduce) and other libraries and utilities needed for Hadoop (Common).
On top of these come other optional tools like a database that runs on top of HDFS (HBase), a platform for a querying language (Pig), and a data warehouse infrastructure (Hive), to name a few of the popular ones. This is where Apache Spark steps in as a new data processing module, famous for its in-memory performance and its use of fast-performing Resilient Distributed Datasets (RDDs), unlike Hadoop MapReduce, which doesn't employ in-memory (and on-disk) operations as efficiently. The latest benchmark released by Databricks shows that Spark was 3x faster than Hadoop in sorting a petabyte of data, while using 10x fewer nodes. The classic use case for Hadoop would be querying data, while Spark is getting famous for its fast runtimes of machine learning algorithms. But this is only the tip of the iceberg, as stated by Databricks: "Spark enables applications in Hadoop clusters to run up to 100x faster in memory, and 10x faster even when running on disk".

Bottom line: Spark is the new rising star in Hadoop's ecosystem. There's a common misconception that we're talking about something unrelated or competing, but I believe that what we're seeing here is the evolution of the framework.

3. Quasar fibers

Breaking native threads into virtual lightweight threads

We've had the chance to run through Hadoop, so now let's get back to single machines. In fact, let's zoom in even further than the standard multithreaded Java application and focus on one single thread. As far as we're concerned, HotSpot JVM threads are the same as native OS threads; holding one thread and running "virtual" threads within it is what fibers are all about. Java doesn't have native fiber support, but no worries: Quasar by Parallel Universe has us covered. Quasar is an open source JVM library that supports fibers (also known as lightweight threads), and it also acts as an actor framework, which I'll mention later. Context switching is the name of the game here.
As we're limited by the number of cores, once the native thread count grows larger we're subjected to more and more context-switching overhead. One way around this is fibers: using a single thread that supports "multithreading". Looks like a case of threadception. Fibers can also be seen as an evolution from thread pools, dodging the dangers of thread overload we went through with Parallel Streams. They make it easier to scale threads and allow a significantly larger number of concurrent "light" threads. They're not intended to replace threads, and should be used for code that blocks relatively often; it's like they're acting as true async threads.

Bottom line: Parallel Universe is offering a fresh approach to concurrency in Java. It hasn't reached v1.0 yet, but it's definitely worth checking out.

4. Actors & Reactive Programming

A different model for handling concurrency in Java

In the Reactive Manifesto, the new movement is described with 4 principles: Responsive, Resilient, Elastic and Message-Driven. Which basically means fast, fault tolerant, scalable and supporting non-blocking communication. Let's see how Akka Actors support that. To simplify things, think of actors as people that have a state and a certain behavior, communicating by exchanging messages that go to each other's mailbox. An actor system as a whole should be created per application, with a hierarchy that breaks down tasks into smaller tasks, so that each actor has at most one supervising actor. An actor can either take care of the task, break it down even further by delegating it to another actor or, in case of failure, escalate it to its supervisor. Either way, messages shouldn't include behavior or share mutable state: each actor has an isolated state and behavior of its own. It's a paradigm shift from the concurrency models most developers are used to, and a bit of an offshoot from the evolution in the first 3 topics we covered here.
Although its roots stem back to the 1970s, it stayed under the radar until recent years, with a revival to better fit modern application demands. Parallel Universe's Quasar also supports actors, based on its lightweight threads; the main difference in implementation lies in the fibers/lightweight threads.

Bottom line: Taking on the actor model takes managing thread pools off your back, leaving it to the toolkit. The revival of interest comes from the kind of problems applications deal with today: highly concurrent systems with many more cores than we used to work with.

Conclusion

We've run through 4 methods to solve problems using concurrent or parallel algorithms, with the most interesting approaches to tackle today's challenges. Hopefully this helped pique your interest and gave you a better view of the hot topics in concurrency today. Going beyond the thread pools, there's a trend of delegating this responsibility to the language and its tools, focusing dev resources on shipping new functionality rather than spending countless hours solving race conditions and locks.

Reference: Beyond Thread Pools: Java Concurrency is Not as Bad as You Think from our JCG partner Alex Zhitnitsky at the Takipi blog....

Exploration of ideas

There are many professions for an individual to choose from, and I believe one should follow the profession one likes most, or hates least; the chances of success and quality of life are both much better that way. So, if you ask me why I chose software development as a career, I can assure you that programming is a fun career. Of course, it is not fun because of sitting in front of the monitor and typing endlessly on the keyboard; it is fun because we control what we write and we can let our innovation run wild. In this article, I want to share some ideas that I have tried; please see for yourself if any of them fits your work.

TOOLS

Using the OpenExtern and AnyEdit plugins for Eclipse

Of all the plugins that I have tried, these two are my favorites. They are small, fast and stable. OpenExtern gives you a shortcut to open the file explorer or console for any Eclipse resource. In my early days of using Eclipse, I often found myself opening project properties just to copy the project folder path in order to open a console or file explorer. The OpenExtern plugin turns that tedious 5-to-10-second process into a one-second mouse click. This simple tool actually helps a lot because many of us run Maven or Git commands from the console. The other plugin that I find useful is AnyEdit. It adds a handful of converting, comparing and sorting tools to the Eclipse editor, eliminating the need for an external editor or comparison tool. I also like to turn on auto formatting and removal of trailing spaces on save. This works pretty well if all of us have the same formatter configuration (line wrapping, indentation, etc.). Before we standardized our code formatter, we had a hard time comparing and merging source code after each check-in. Other than these two, in the past I often installed the Decompiler and Json Editor plugins.
However, as most Maven artifacts nowadays are uploaded with source code, and JSON content can be viewed easily using a Chrome JSON plugin, I do not find these plugins useful anymore.

Using JArchitect to monitor code quality

In the past, we monitored the code quality of projects by eye. That seems good enough when you have time to take care of all modules. Things get more complicated when the team is growing or multiple teams are working on the same project. Eventually, the code still needs to be reviewed, and we need to be alerted if things go wrong. Fortunately, I got an offer from JArchitect to try out their latest product. First and foremost, this is a standalone product rather than the traditional IDE integration. For me, that is a plus because you may not want to make your IDE too heavy. The second good thing is that JArchitect can understand Maven, which is a rare feature in the market. The third good thing is that JArchitect creates its own project file in its own workspace, which does no harm to the original Java project. Currently, the product is commercial, but you may want to take a look to see if the benefit justifies the cost.

SETTING UP PROJECTS

As we all know, a Java web project has both unit tests and functional tests. For functional tests, if you use a framework like Spring MVC, it is possible to create tests for controllers without bringing up the server. Otherwise, it is quite normal that developers need to start up the server, run the functional tests, then shut down the server. This process is a bit tedious, given that the project may have been created by someone else with whom we have never communicated. Therefore, we try to set up projects in such a way that people can just download and run them without much hassle.

Server

In the past, we set up the server manually for each local development box and for the integration server (CruiseControl or Hudson). Gradually, our common practice shifted toward checking the server into every project.
The motivation behind this move is to save the effort of setting up a new project after checking it out. Because the server is embedded inside the project, there is no need to download or set up the server for each box. Moreover, this practice discourages sharing a server among projects, which makes things less error prone. Other than the server, there are two other elements inside a project that are server dependent: properties files and the database. For each of these elements, we have slightly different practices, depending on the situation.

Properties files

Checking in a properties template rather than the properties file itself

Each developer needs to clone the template file and edit it when necessary. For the continuous integration server, it is a bit trickier. Developers can manually create the file in the workspace, or simply check in the build server's properties file with the project. The former practice is not used any more because it is too error prone: any attempt to clean the workspace deletes the properties file, and we cannot track modifications to the properties in the file. Moreover, as we set up Jenkins as a cluster rather than a single node like in the past, it is not applicable any more. For the second practice, rather than checking in my-project.properties, I would rather check in my-project.properties-template and my-project.properties-jenkins. The first file can be used as guidance to set up the local properties file, while the second can be renamed and used for Jenkins.

Using host names to define external network connections

This practice may work better when we need to set up similar network connections for various projects. Let's say we need to configure the database for around 10 similar projects pointing to the same database. In this case, we can hard-code the database host name in the properties file and manually set up the hosts file on each build node to point the pre-defined domain to the database.
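As an illustration of the template practice described above, the two checked-in files might look like this (the file contents and keys are invented for this sketch, not taken from a real project):

```properties
# my-project.properties-template (checked in)
# Clone to my-project.properties on each development box and fill in the values.
db.host=
db.user=
db.password=

# my-project.properties-jenkins (checked in)
# Renamed to my-project.properties on the Jenkins build node.
db.host=build-db.example.com
db.user=jenkins_build
db.password=not-a-real-password
```

The template documents which keys each developer must supply, while the Jenkins variant keeps the build configuration under version control.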
For non-essential properties, provide as many default values as possible

There is nothing much to say about this practice. We only need to remind ourselves to be less lazy so that other people may enjoy handling our projects.

Using a landlord service

This is a pretty new practice that we have only applied since last year. Under the regulations in our office, the Web Service team is the only team that manages and has access to the UAT and PRODUCTION servers. That is why we need to guide them, and they need to do the repetitive setup for at least 2 environments, both of which normally require clustering. It was quite tedious for them until the day we consolidated the properties of all environments and all projects into a single landlord service. Since then, each project starts up and connects to the landlord service with an environment id and an application id, and the landlord happily serves it all the information it needs.

Database

Using DBUnit to set up the database once and let each test case automatically roll back its transaction

This is the traditional practice, and it still works quite well nowadays. However, it still requires the developer to create an empty schema for DBUnit to connect to. For it to work, the project must have a transaction management framework that supports automatic rollback in the test environment. It also requires that the database operations happen within the test environment. For example, if in a functional test we send an HTTP request to the server, the database operation happens in the server itself rather than in the test environment, and we cannot do anything to roll it back.

Running the database in memory

This is a built-in feature of the Play framework. Developers work with an in-memory database in development mode and an external database in production mode. This is doable as long as developers only work with JPA entities and have no knowledge of the underlying database system.

Database evolutions

This is an idea borrowed from RoR.
Rather than setting up the database from the beginning, we can simply check the current version and sequentially run the delta scripts so that the database ends up with the wanted schema. As above, it is expensive to do this yourself unless there is native support from a framework like RoR or Play.

CODING

I have been in the industry for 10 years and I can tell you that software development is like fashion: there is no clear separation between good and bad practice, and whatever is classified as a bad practice today may come back another day as a new best practice. Let's summarize some of the heated debates we have had.

Access modifiers for class attributes

Most of us were taught that we should hide class attributes from external access. Instead, we are supposed to create a getter and setter for each attribute. I strictly followed this rule in my early days, even though I did not know that most IDEs can auto-generate getters and setters. Later, however, I was introduced to another idea: that setters are dangerous. They allow other developers to spoil your object and mess up the system. In this view, you should create immutable objects and not provide attribute setters. The challenge then is how to write our unit tests if we hide the setters for class attributes. Sometimes, the attributes are injected into the object using an IoC framework like Spring. If there is no framework around, we may need to use reflection utilities to insert mock dependencies into the test object. I have seen many developers solve the problem this way, but I think it is over-engineering. If we compromise a bit, it is much more convenient to use the package modifier for the attribute. As a best practice, test cases will always be in the same package as the implementation, so we should have no issue injecting mock dependencies. The package is normally controlled and contributed to by the same individual or team, so the chance of spoiling the object is minimal. Finally, as the package modifier is the default modifier, it saves a few bytes of your code.
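A minimal sketch of this compromise is below; the class names are invented for illustration. The dependency field uses the default (package) modifier, so a test in the same package can inject a mock directly, with no setter and no reflection utilities:

```java
// Production class: the dependency field is package-private (default modifier),
// so there is no public setter that could spoil the object from outside the package.
class OrderService {
    OrderRepository repository; // package-private: injectable by tests in the same package

    long countOrders() {
        return repository.count();
    }
}

// A small collaborator interface, easy to mock with a lambda.
interface OrderRepository {
    long count();
}

// Test class living in the same package: it can assign the field directly.
class OrderServiceTest {
    public static void main(String[] args) {
        OrderService service = new OrderService();
        service.repository = () -> 42L; // inject a mock via the package-private field
        assert service.countOrders() == 42L;
        System.out.println("countOrders: " + service.countOrders()); // countOrders: 42
    }
}
```

Callers in other packages still cannot touch the field, so the object remains effectively immutable to the outside world.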
Monolithic application versus Microservice architecture

When I joined the industry, the word "Enterprise" meant lots of XML, lots of deployment steps, huge applications, many layers, and very domain-oriented code. As things evolved, we adopted ORM, learned to split business logic from presentation logic, and learned how to simplify and enrich our applications with AOP. It went well until I was told again that the way to move forward is Microservice architecture, which makes a Java enterprise application function similarly to a PHP application. The biggest benefit we have with Java now may be the performance advantage of the Java Virtual Machine. When adopting Microservice architecture, it is obvious that our applications will be database oriented. By removing the layers, it also minimizes the benefit of AOP. The code base will surely shrink, but it will be harder to write unit tests.

Reference: Exploration of ideas from our JCG partner Nguyen Anh Tuan at the Developers Corner blog....

Beginner’s Guide to Hazelcast Part 2

This article continues the series that I have started featuring Hazelcast, a distributed, in-memory database. If one has not read the first post, please click here.

Distributed Collections

Hazelcast has a number of distributed collections that can be used to store data: IList, ISet and IQueue.

IList

IList is a collection that keeps the order of what is put in and can have duplicates. In fact, it implements the java.util.List interface. It is not thread safe, and one must use some sort of mutex or lock to control access by many threads. I suggest Hazelcast's ILock.

ISet

ISet is a collection that does not keep the order of the items placed in it. However, the elements are unique. This collection implements the java.util.Set interface. Like ILists, this collection is not thread safe. I suggest using the ILock again.

IQueue

IQueue is a collection that keeps the order of what comes in and allows duplicates. It implements the java.util.concurrent.BlockingQueue interface, so it is thread safe. This is the most scalable of the collections because its capacity grows as the number of instances goes up. For instance, let's say there is a limit of 10 items for a queue. Once the queue is full, no more can go in unless another Hazelcast instance comes up; then another 10 spaces are available, and a copy of the queue is also made. IQueues can also be persisted by implementing the QueueStore interface.

What They Have in Common

All three of them implement the ICollection interface. This means one can add an ItemListener to them, which lets one know when an item is added or removed. An example of this is in the Examples section.

Scalability

As scalability goes, ISet and IList don't do that well in Hazelcast 3.x. This is because the implementation changed from being map based to becoming a collection in the MultiMap. This means they don't partition and don't go beyond a single machine.
Striping the collections can go a long way, or one can make one's own collections based on the mighty IMap. Another way is to implement Hazelcast's SPI.

Examples

Here is an example of an ISet, IList and IQueue. All three of them have an ItemListener. The ItemListener is added in the hazelcast.xml configuration file. One can also add an ItemListener programmatically for those so inclined. A main class and the snippet of the configuration file that configures the collection will be shown.

CollectionItemListener

I implemented the ItemListener interface to show that all three of the collections can have an ItemListener. Here is the implementation:

package hazelcastcollections;

import com.hazelcast.core.ItemEvent;
import com.hazelcast.core.ItemListener;

/**
 * @author Daryl
 */
public class CollectionItemListener implements ItemListener {

    @Override
    public void itemAdded(ItemEvent ie) {
        System.out.println("ItemListener - itemAdded: " + ie.getItem());
    }

    @Override
    public void itemRemoved(ItemEvent ie) {
        System.out.println("ItemListener - itemRemoved: " + ie.getItem());
    }
}

ISet Code

package hazelcastcollections.iset;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ISet;

/**
 * @author Daryl
 */
public class HazelcastISet {

    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();
        ISet<String> set = instance.getSet("set");
        set.add("Once");
        set.add("upon");
        set.add("a");
        set.add("time");

        ISet<String> set2 = instance2.getSet("set");
        for (String s : set2) {
            System.out.println(s);
        }

        System.exit(0);
    }
}

Configuration

<set name="set">
  <item-listeners>
    <item-listener include-value="true">hazelcastcollections.CollectionItemListener</item-listener>
  </item-listeners>
</set>

IList Code

package hazelcastcollections.ilist;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IList;

/**
 * @author Daryl
 */
public class HazelcastIlist {

    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();
        IList<String> list = instance.getList("list");
        list.add("Once");
        list.add("upon");
        list.add("a");
        list.add("time");

        IList<String> list2 = instance2.getList("list");
        for (String s : list2) {
            System.out.println(s);
        }
        System.exit(0);
    }
}

Configuration

<list name="list">
  <item-listeners>
    <item-listener include-value="true">hazelcastcollections.CollectionItemListener</item-listener>
  </item-listeners>
</list>

IQueue Code

I left this one for last because I have also implemented a QueueStore. There is no call on IQueue to add a QueueStore; one has to configure it in the hazelcast.xml file.

package hazelcastcollections.iqueue;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

/**
 * @author Daryl
 */
public class HazelcastIQueue {

    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();
        IQueue<String> queue = instance.getQueue("queue");
        queue.add("Once");
        queue.add("upon");
        queue.add("a");
        queue.add("time");

        IQueue<String> queue2 = instance2.getQueue("queue");
        for (String s : queue2) {
            System.out.println(s);
        }

        System.exit(0);
    }
}

QueueStore Code

package hazelcastcollections.iqueue;

import com.hazelcast.core.QueueStore;
import java.util.Collection;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

/**
 * @author Daryl
 */
public class QueueQStore implements QueueStore<String> {

    @Override
    public void store(Long l, String t) {
        System.out.println("storing " + t + " with " + l);
    }

    @Override
    public void storeAll(Map<Long, String> map) {
        System.out.println("store all");
    }

    @Override
    public void delete(Long l) {
        System.out.println("removing " + l);
    }

    @Override
    public void deleteAll(Collection<Long> clctn) {
        System.out.println("deleteAll");
    }

    @Override
    public String load(Long l) {
        System.out.println("loading " + l);
        return "";
    }

    @Override
    public Map<Long, String> loadAll(Collection<Long> clctn) {
        System.out.println("loadAll");
        Map<Long, String> retMap = new TreeMap<>();
        return retMap;
    }

    @Override
    public Set<Long> loadAllKeys() {
        System.out.println("loadAllKeys");
        return new TreeSet<>();
    }
}

Configuration

A few points need to be addressed when it comes to configuring the QueueStore. There are three properties that do not get passed to the implementation. The binary property deals with how Hazelcast sends the data to the store: normally, Hazelcast stores the data serialized and deserializes it before it is sent to the QueueStore, but if the property is true, the data is sent serialized. The default is false. The memory-limit is how many entries are kept in memory before being put into the QueueStore; a memory-limit of 10000 means that the 10001st entry is sent to the QueueStore. At initialization of the IQueue, entries are loaded from the QueueStore; the bulk-load property is how many are pulled from the QueueStore at a time.

<queue name="queue">
  <max-size>10</max-size>
  <item-listeners>
    <item-listener include-value="true">hazelcastcollections.CollectionItemListener</item-listener>
  </item-listeners>
  <queue-store>
    <class-name>hazelcastcollections.iqueue.QueueQStore</class-name>
    <properties>
      <property name="binary">false</property>
      <property name="memory-limit">10000</property>
      <property name="bulk-load">500</property>
    </properties>
  </queue-store>
</queue>

Conclusion

I hope one has learned about distributed collections inside Hazelcast. ISet, IList and IQueue were discussed.
The ISet and IList only stay on the instance on which they are created, while the IQueue has a copy made, can be persisted, and its capacity increases as the number of instances increases. The code can be seen here.

References: The Book of Hazelcast (www.hazelcast.com); Hazelcast Documentation (comes with the Hazelcast download).

Reference: Beginner's Guide to Hazelcast Part 2 from our JCG partner Daryl Mathison at the Daryl Mathison's Java Blog blog....

Lightweight Integration Tests for Eclipse Extensions

Recently I introduced a little helper for Eclipse extension point evaluation. The auxiliary strives to reduce boilerplate code for common programming steps, while increasing development guidance and readability at the same time. This post is the promised follow-up that shows how to combine the utility with an AssertJ custom assert to write lightweight integration tests for Eclipse extensions.

Eclipse Extensions

Loose coupling is partially achieved in Eclipse by the mechanism of extension-points and extensions, whereby an extension serves as a contribution to a particular extension-point. However, the declarative nature of extensions and extension-points sometimes leads to problems that can be difficult to trace. This may be the case if, by accident, the extension declaration has been removed, the default constructor of an executable extension has been expanded with parameters, the plugin.xml has not been added to the build.properties, or the like. Depending on the PDE Error/Warning settings, one should be informed about a lot of these problems by markers, but somehow it happens again and again that contributions are not recognized and valuable time gets lost in error tracking. Because of this, it might be helpful to have lightweight integration tests in place to verify that a certain contribution actually is available. For general information on how to extend Eclipse using the extension point mechanism, you might refer to the Plug-in Development Environment Guide of the online documentation.
Integration tests with JUnit Plug-in Tests

Given the extension-point definition of the last post, an extension contribution could look like this:

<extension point="com.codeaffine.post.contribution">
  <contribution id="myContribution" class="com.codeaffine.post.MyContribution">
  </contribution>
</extension>

Assuming that we have a test fragment as described in Testing Plug-ins with Fragments, we can introduce a PDETest to verify that the extension above exists with the given id and is instantiable via a default constructor. This test makes use of the RegistryAdapter introduced in the previous post and a specific custom assert called ExtensionAssert:

public class MyContributionPDETest {

  @Test
  public void testExtension() {
    Extension actual = new RegistryAdapter()
      .readExtension( "com.codeaffine.post.contribution" )
      .thatMatches( attribute( "id", "myContribution" ) )
      .process();

    assertThat( actual )
      .hasAttributeValue( "class", MyContribution.class.getName() )
      .isInstantiable( Runnable.class );
  }
}

As described in the previous post, RegistryAdapter#readExtension(String) reads exactly one extension for the given 'id' attribute. In case it detects more than one contribution with this attribute, an exception is thrown. ExtensionAssert#assertThat(Extension) (used via static import) provides an AssertJ custom assert with some common checks for extension contributions. The example verifies that the value of the 'class' attribute matches the fully qualified name of the contribution's implementation type, that the executable extension is actually instantiable using the default constructor, and that the instance is assignable to Runnable.

Where to get it?

For those who want to check it out, there is a P2 repository that contains the features com.codeaffine.eclipse.core.runtime and com.codeaffine.eclipse.core.runtime.test.util, providing the RegistryAdapter and the ExtensionAssert.
The repository is located at http://fappel.github.io/xiliary/ and the source code and issue tracker are hosted at https://github.com/fappel/xiliary. Although documentation is completely missing at the moment, it should be quite easy to get started with the explanations given in this and the previous post. But keep in mind that the features are in a very early state and will probably undergo some API changes. In particular, the assertions for nested extensions seem a bit too weak at the moment. In case you have ideas for improvement or find some bugs, the issue tracker is probably the best place to handle them; for everything else, feel free to use the comment section below.

Reference: Lightweight Integration Tests for Eclipse Extensions from our JCG partner Frank Appel at the Code Affine blog....

Storing the state of an activity of your Android application

This is the last post in my series about saving data in your Android application. The previous posts went over the various ways to save data in your application:

Introduction: How to save data in your Android application
Saving data to a file in your Android application
Saving preferences in your Android application
Saving to a SQLite database in your Android application

This final post will explain when you should save the current state of your application so your users do not lose their data. There are two types of state that can be saved: you can save the current state of the UI if the user is interrupted while entering data, so they can resume input when the application is started again, or you can save the data to a more permanent data store to access it at any time.

Saving the current UI state

When another application is launched and hides your application, the data your user entered may not be ready to be saved to a permanent data store yet, since the input is not done. On the other hand, you should still save the current state of the activity so the user doesn't lose all their work if, for example, a phone call comes in. A configuration change on the Android device, like rotating the screen, has the same effect, so this is another good reason to save the state. When one of those events, or any other event that requires saving the state of the activity, occurs, the Android SDK calls the onSaveInstanceState method of the current activity, which receives an android.os.Bundle object as a parameter. If you use standard views from the Android SDK and those views have unique identifiers, the state of those controls is automatically saved to the bundle. But if you use multiple instances of a view that have the same identifier, for example by repeating a view in a ListView, the values entered in your controls will not be saved, since the identifier is duplicated.
Also, if you create your own custom controls, you will have to save the state of those controls yourself. If you need to manually save the state, you must override the onSaveInstanceState method and add your own information to the android.os.Bundle received as a parameter, as key/value pairs. This information will then be available later on, when the activity needs to be restored, in its onCreate and onRestoreInstanceState methods. All primitive types, and arrays of values of those types, can be saved to a bundle. If you want to save objects or an array of objects to the bundle, they must implement the java.io.Serializable or android.os.Parcelable interface. To demonstrate saving state, I will use an upgraded version of the application from the article about saving to a database, which is available on GitHub at http://github.com/CindyPotvin/RowCounter. The application manages row counters used for knitting projects, but it had no way to create a project. In the new version, the user can now create a new project, and the state of the project being created needs to be saved if the user is interrupted while creating it. For demonstration purposes, numerical values are entered using a custom CounterView control that does not handle saving its state, so we must save the state of each counter manually to the bundle.
@Override
public void onSaveInstanceState(Bundle savedInstanceState) {
   CounterView rowCounterAmountView = (CounterView)this.findViewById(R.id.row_counters_amount);
   savedInstanceState.putInt(ROW_COUNTERS_AMOUNT_STATE, rowCounterAmountView.getValue());

   CounterView rowAmountView = (CounterView)this.findViewById(R.id.rows_amount);
   savedInstanceState.putInt(ROWS_AMOUNT_STATE, rowAmountView.getValue());

   // Call the superclass to save the state of all the other controls in the view hierarchy
   super.onSaveInstanceState(savedInstanceState);
}

When the user navigates back to the application, the activity is recreated automatically by the Android SDK from the information that was saved in the bundle. At that point, you must also restore the UI state of your custom controls. You can restore the UI state using the data you saved to the bundle from two methods of your activity: the onCreate method, which is called first when the activity is recreated, or the onRestoreInstanceState method, which is called after the onStart method. You can restore the state in one method or the other, and in most cases it won't matter, but both are available in case some initialization needs to be done after the onCreate and onStart methods. Here are the two possible ways to restore the state from the activity using the bundle saved in the previous example:

@Override
protected void onCreate(Bundle savedInstanceState) {
   [...Normal initialization of the activity...]

   // Check if a previously destroyed activity is being recreated.
   // If a new activity is created, the savedInstanceState will be empty
   if (savedInstanceState != null) {
      // Restore the value of the counters from the saved state
      CounterView rowCounterAmountView;
      rowCounterAmountView = (CounterView)this.findViewById(R.id.row_counters_amount);
      rowCounterAmountView.setValue(savedInstanceState.getInt(ROW_COUNTERS_AMOUNT_STATE));

      CounterView rowAmountView = (CounterView)this.findViewById(R.id.rows_amount);
      rowAmountView.setValue(savedInstanceState.getInt(ROWS_AMOUNT_STATE));
   }
}

@Override
protected void onRestoreInstanceState(Bundle savedInstanceState) {
   // Call the superclass to restore the state of all the other controls in the view hierarchy
   super.onRestoreInstanceState(savedInstanceState);

   // Restore the value of the counters from the saved state
   CounterView rowCounterAmountView = (CounterView)this.findViewById(R.id.row_counters_amount);
   rowCounterAmountView.setValue(savedInstanceState.getInt(ROW_COUNTERS_AMOUNT_STATE));

   CounterView rowAmountView = (CounterView)this.findViewById(R.id.rows_amount);
   rowAmountView.setValue(savedInstanceState.getInt(ROWS_AMOUNT_STATE));
}

Remember, saving data in a bundle is not meant to be a permanent data store, since it only stores the current state of the view: it is not part of the activity lifecycle and is only used when the activity needs to be recreated or is sent to the background. This means that the onSaveInstanceState method is not called when the application is destroyed, since the activity state could never be restored. To save data that should never be lost, you should save it to one of the permanent data stores described earlier in this series. But when should this data be stored?

Saving your data to a permanent data store

If you need to save data to a permanent data store when the activity is sent to the background or destroyed for any reason, you must save your data in the onPause method of the activity.
The onStop method is called only if the UI is completely hidden, so you cannot rely on it being raised all the time. All the essential data must be saved at this point, because you have no control over what happens afterwards: the user may kill the application, for example, and the data would be lost. In the previous version of the application, when the user incremented the counter for a row in a project, the application saved the current value of the counter to the database every time. Now we'll save the data only when the user leaves the activity, and saving at each counter press is no longer required:

@Override
public void onPause() {
   super.onPause();

   ProjectsDatabaseHelper database = new ProjectsDatabaseHelper(this);
   // Update the value of all the counters of the project in the database, since the
   // activity is destroyed or sent to the background
   for (RowCounter rowCounter: mRowCounters) {
      database.updateRowCounterCurrentAmount(rowCounter);
   }
}

Later on, if your application was not destroyed and the user accesses the activity again, there are two possible processes that can occur, depending on how the Android OS handled your activity. If the activity was still in memory, for example if the user opened another application and came back immediately, the onRestart method is called first, followed by a call to the onStart method; finally the onResume method is called and the activity is shown to the user. But if the activity was recycled and is being recreated, for example if the user rotated the device so the layout is recreated, the process is the same as for a new activity: the onCreate method is called first, followed by a call to the onStart method; finally the onResume method is called and the activity is shown to the user.
So, if you want to use the data that was saved to a permanent data store to initialize controls in your activity that lose their state, you should put your code in the onResume method, since it is always called, regardless of whether the activity was recreated or not. In the previous example, it is not necessary to explicitly restore the data, since no custom controls were used: if the activity was recycled, it is recreated from scratch and the onCreate method initializes the controls from the data in the database. If the activity is still in memory, there is nothing else to do: the Android SDK handles showing the values as they were first displayed, as explained earlier in the section about saving the UI state. Here is a reminder of what happens in the onCreate method:

@Override
protected void onCreate(Bundle savedInstanceState) {
   super.onCreate(savedInstanceState);
   setContentView(R.layout.project_activity);

   Intent intent = getIntent();
   long projectId = intent.getLongExtra("project_id", -1);
   // Gets the database helper to access the database for the application
   ProjectsDatabaseHelper database = new ProjectsDatabaseHelper(this);
   // Use the helper to get the current project
   Project currentProject = database.getProject(projectId);

   TextView projectNameView = (TextView)findViewById(R.id.project_name);
   projectNameView.setText(currentProject.getName());
   // Initialize the listview to show the row counters for the project from
   // the database
   ListView rowCounterList = (ListView)findViewById(R.id.row_counter_list);
   mRowCounters = currentProject.getRowCounters();
   ListAdapter rowCounterListAdapter = new RowCounterAdapter(this,
      R.layout.rowcounter_row,
      currentProject.getRowCounters());
   rowCounterList.setAdapter(rowCounterListAdapter);
}

This concludes the series about saving data in your Android application.
You now know about the various types of data storage that are available for Android applications, and when they should be used so your users never lose their data and have the best user experience possible.

Reference: Storing the state of an activity of your Android application from our JCG partner Cindy Potvin at the Web, Mobile and Android Programming blog.

Using PouchDB for Offline/Data Sync

Recently the term “Mobile First” received additional notoriety as the new CEO of Microsoft proclaimed the company’s shift in focus. As I’ve been researching mobile frameworks lately, I’ve run across another term – “Offline First.” As much as you may be online with your mobile phone or tablet, inevitably there will be times when you’re not connected, but still need to work. It’s in these times that mobile apps with offline/data sync capabilities shine. Not all app publishers want to store data on public clouds like iCloud, especially with recent security breaches. Several other options exist, and for this post, I’m going to look at the mobile side of one contender.

CouchDB and Data Synchronization

CouchDB has been around for quite some time, and although it hasn’t received the same attention as MongoDB, it has a very compelling feature. Its data replication/synchronization feature is quite robust and mature. It offers “near-time” replication with the primary focus of fail-over for clusters of CouchDB servers. In fact, “Couch” is an acronym for “cluster of unreliable commodity hardware”, and the CouchDB team chose the approach of “eventual data consistency” over immediate mirroring. So, if you’re looking for real-time updates in your mobile app, you should probably look into other alternatives like Socket.IO, etc. However, I’ve found that “near-time” is close enough (virtually indistinguishable) for my purposes. Several client libraries have sprung up around the CouchDB replication protocol (over HTTP) to make it easier for developers to store data locally and sync to remote servers when an internet connection is available. I’ve been looking into a few, with one in particular that should be of interest to hybrid app developers.

PouchDB – a JavaScript CouchDB Implementation

PouchDB offers not only client data replication to remote servers, but also provides a robust key/value data store for the browser as well.
And not only the browser, but any JavaScript environment with disk access, including Node.js. The PouchDB web site has an introductory tutorial that steps you through the process of building a simple single page app (SPA). The app is modeled after the vanilla JavaScript implementation of the excellent TodoMVC benchmark for JavaScript frameworks. If you’re looking for a great comparison of SPA frameworks, by the way, I’d highly recommend checking out TodoMVC. In the PouchDB tutorial, they demonstrate the data sync capabilities using a hosted CouchDB instance running on Iris Couch. I have committed the completed tutorial code on GitHub using a local Node/Express/PouchDB server to demo its data sync capabilities. This way you can play around with the code without having to sign up for a hosted CouchDB instance on Iris. Note that if you’re planning to run this on Windows, you’ll need to be able to run node-gyp to build one of the Node modules. I’ve been able to set up my environment on Windows, but would recommend just running a Linux VM instead.

Clone, Run and Explore

As simple as:

git clone https://github.com/DJacksonKHS/todo-sync.git
cd todo-sync
npm install
npm start

Then, browse to http://localhost:3000 to check out the app. Open a second tab in your browser to see your changes sync across clients. I’m also putting together a simple Cordova/PhoneGap app that I will commit to another repo, so check back soon to see this app running on a mobile device. It will include a running CouchDB or PouchDB Server instance, probably on Iris Couch or Heroku.

Conclusion

PouchDB has adapters for at least a few mobile SPA frameworks, including Backbone and Ember. I was unable to get the Ember adapter working properly – still not ready for prime time. I haven’t finished integrating the Backbone adapter yet, but will probably use it for the packaged hybrid app if I can’t get Ember working soon.
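Before moving on, the “eventual data consistency” idea mentioned above can be made concrete with a toy model: each replica keeps a revision counter per document, and a sync pass keeps whichever revision is newer. This is my own simplification for illustration only, not the actual CouchDB replication protocol (which uses revision histories and preserves conflicts for the application to resolve):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of eventual consistency: two replicas converge by exchanging
// documents and keeping the higher revision of each one.
public class ToySync {
    static class Doc {
        final int rev;
        final String body;
        Doc(int rev, String body) { this.rev = rev; this.body = body; }
    }

    // One-way merge: pull any newer (or missing) revisions from source into target.
    static void pull(Map<String, Doc> target, Map<String, Doc> source) {
        source.forEach((id, doc) -> {
            Doc local = target.get(id);
            if (local == null || doc.rev > local.rev) {
                target.put(id, doc);
            }
        });
    }

    public static void main(String[] args) {
        Map<String, Doc> phone = new HashMap<>();   // offline client replica
        Map<String, Doc> server = new HashMap<>();  // remote CouchDB stand-in

        phone.put("todo:1", new Doc(2, "buy milk (edited offline)"));
        server.put("todo:1", new Doc(1, "buy milk"));
        server.put("todo:2", new Doc(1, "write post"));

        // When a connection becomes available, replicate in both directions.
        pull(server, phone);
        pull(phone, server);

        System.out.println(server.get("todo:1").body);
        System.out.println(phone.get("todo:2").body);
    }
}
```

After the two pulls, both replicas hold the offline edit of todo:1 and the server-side todo:2 – the “eventual” part is simply that this happens whenever a connection is next available, not at write time.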
If you’re looking into offline data sync for your mobile hybrid app, I highly recommend you look into PouchDB. Not only does it have an excellent data sync feature, but the key/value store modeled after CouchDB is also a great way to store and cache data locally. Hopefully this simple introduction will be helpful as you evaluate your options.Reference: Using PouchDB for Offline/Data Sync from our JCG partner Dave Jackson at the Keyhole Software blog....

How To Return Error Details From REST APIs

The HTTP protocol uses status codes to return error information. This facility, while extremely useful, is too limited for many use cases. So how do we return more detailed information? There are basically two approaches we can take:

- Use a dedicated media type that contains the error details
- Include the error details in the used media type

Dedicated Error Media Types

There are at least two candidates in this space:

- Problem Details for HTTP APIs is an IETF draft that we’ve used in some of our APIs to good effect. It treats both problem types and instances as URIs and is extensible. This media type is available in both JSON and XML flavors.
- application/vnd.error+json is for JSON media types only. It also feels less complete and mature to me.

A media type dedicated to error reporting has the advantage of being reusable in many REST APIs. Any existing libraries for handling them could save us some effort. We also don’t have to think about how to structure the error details, as someone else has done all that hard work for us. However, putting error details in a dedicated media type also increases the burden on clients, since they now have to handle an additional media type. Another disadvantage has to do with the Accept header. It’s highly unlikely that clients will specify the error media type in Accept. In general, we should either return 406 or ignore the Accept header when we can’t honor it. The first option is not acceptable (pun intended), the second is not very elegant.

Including Error Details In Regular Media Types

We could also design our media types such that they allow specifying error details. In this post, I’m going to stick with three mature media types: Mason, Siren, and UBER.
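As a reference point before looking at those three, here is roughly what producing a Problem Details style body could look like on the server. The JSON is hand-rolled to stay dependency-free, and the URI and values are made up for illustration; only the field names (type, title, status, detail) come from the draft:

```java
// Builds an application/problem+json style body without any JSON library.
public class ProblemDetails {
    // Assembles the four core Problem Details fields into a JSON object.
    static String problemJson(String type, String title, int status, String detail) {
        return String.format(
            "{\"type\":\"%s\",\"title\":\"%s\",\"status\":%d,\"detail\":\"%s\"}",
            type, title, status, detail);
    }

    public static void main(String[] args) {
        System.out.println(problemJson(
            "https://example.com/problems/invalid-input", // hypothetical problem type URI
            "Invalid input",
            400,
            "There was a problem with one or more input values."));
    }
}
```

In a real service you would of course use a proper JSON serializer and set the Content-Type header to application/problem+json; the point here is only how little structure the draft actually requires.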
Mason uses the @error property:

{
  "@error": {
    "@id": "b2613385-a3b2-47b7-b336-a85ac405bc66",
    "@message": "There was a problem with one or more input values.",
    "@code": "INVALIDINPUT"
  }
}

The existing error properties are compatible with Problem Details for HTTP APIs, and they can be extended. Siren doesn’t have a specific structure for errors, but we can easily model errors with the existing structures:

{
  "class": [ "error" ],
  "properties": {
    "id": "b2613385-a3b2-47b7-b336-a85ac405bc66",
    "message": "There was a problem with one or more input values.",
    "code": "INVALIDINPUT"
  }
}

We can even go a step further and use entities to add detailed sub-errors. This would be very useful for validation errors, for instance, where you can report all errors to the client at once. We could also use the existing actions property to include a retry action. And we could use the error properties from Problem Details for HTTP APIs. UBER has an explicit error structure:

{
  "uber": {
    "version": "1.0",
    "error": {
      "data": [
        { "name": "id", "value": "b2613385-a3b2-47b7-b336-a85ac405bc66" },
        { "name": "message", "value": "There was a problem with one or more input values." },
        { "name": "code", "value": "INVALIDINPUT" }
      ]
    }
  }
}

Again, we could reuse the error properties from Problem Details for HTTP APIs.

Conclusion

My advice would be to use the error structure of your existing media type and use its extensibility features to steal all the good ideas from Problem Details for HTTP APIs.

Reference: How To Return Error Details From REST APIs from our JCG partner Remon Sinnema at the Secure Software Development blog.

The Illusion of Control

Why do we keep trying to attain control? We don’t like uncertainty, and control over a piece of reality relaxes us. A controlled environment is good for our health. In fact, when we are in control we don’t need to do anything. Things work out by themselves, to our satisfaction, because we’re in control. Being in control means never having to decide, because decisions are made for us. Decision making is hard. It’s tiring. If we get to control nirvana, it’s all smooth sailing from there. So we remain with just a tiny problem, because the amount of control in our world is…

Slim to None

I cannot coerce people to do my bidding. I can try to persuade, incentivize, cajole, bribe, threaten. In the end, what they do is their choice. I can prepare big project plans and scream at my team to stick to them. I can give them the best tools to do their work, and they still might fail. As managers, we can’t control our employees. As an organization, we can’t control the market. Even as a person, all I can control are my own actions. We want control because it frees us from making decisions. Maybe we should try making decisions easier instead.

Visibility

“You can’t have control, but you can have visibility,” says my friend Lior Friedman. When we have visibility, the fog of uncertainty fades. Choices are clearer. Visibility is a product of our actions. It is under our control. As managers, we can create a safe environment where issues can surface, and we can do something about them, rather than keeping them secret until the last irresponsible moment. We can be honest with our customers to create trust between us and improve our relationship with them, which will benefit both sides. If we can’t have control, we should stop putting effort into attaining it and instead invest in visibility. We’ll still be left with decisions. But life becomes a little less uncertain. That’s good. For everyone, including us.

Reference: The Illusion of Control from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

How to use SSH tunneling to get to your restricted servers

Have you ever been told that in your network, serverX can only be reached via SSH from serverY? You have normal SSH access to serverY from your own PC, but not directly to serverX. What can you do in a situation like this, if you need to access the restricted serverX? Well, you can always ssh into serverY, then ssh again into serverX to check your work or logs or whatever. But what happens if you have a database server or a WebLogic Server instance running on serverX, and you want your local PC’s fancy tools to access serverX? (E.g.: accessing the WLS admin console, or using SqlDeveloper to connect to your DB, etc.) In this case, SSH tunneling can help you, and here is how.

Establish a connection to serverY, which you do have access to from your PC, and at the same time create a tunnel to serverX (your restricted server) by letting serverY redirect the network traffic back to a specific port on your local PC. Sounds scary, but it can be done with a single command. For example, this is how I can access the WLS admin console app running on serverX. On your own PC, open a terminal and run the following:

bash> ssh -L 12345:serverX:7001 serverY

The above will prompt you for your SSH credentials on serverY. Once logged in, you need to keep the terminal open. The tunnel is now established, forwarding connections to port 12345 on your own PC through serverY to port 7001 on serverX, which is where the WLS admin console is running. Open a browser on your own PC and type in the address http://localhost:12345/console

Now you should be able to access the WLS admin console of your restricted serverX!

Same can be done with a database server such as MySQL.
For example, you would run:

ssh -L 12346:serverX:3306 serverY

and then change your SqlDeveloper JDBC connection URL string to the tunnel port:

jdbc:mysql://localhost:12346/mydb

This is a cool technique for getting around a secured environment.

Reference: How to use SSH tunneling to get to your restricted servers from our JCG partner Zemian Deng at the A Programmer’s Journal blog.
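As a footnote to the tunneling recipe above, the forwarding half of what `ssh -L` sets up (minus the encryption to the gateway) can be mimicked with plain sockets: a local listener that relays bytes to a destination the gateway can reach. The sketch below is a single-connection toy version, running the “restricted service”, the relay, and the client all on localhost so it is self-contained:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Conceptual illustration of ssh -L: accept on a local port, connect onward,
// and relay bytes in both directions. Real ssh additionally encrypts the hop
// to the gateway host; none of that is shown here.
public class MiniForward {
    // Copy one direction of the conversation on a background thread.
    static void pipe(InputStream in, OutputStream out) {
        Thread t = new Thread(() -> {
            try { in.transferTo(out); } catch (IOException ignored) { }
        });
        t.setDaemon(true);
        t.start();
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket service = new ServerSocket(0);   // stand-in for serverX:7001
             ServerSocket tunnel = new ServerSocket(0)) {  // stand-in for localhost:12345

            // The "restricted" service: answer one connection and close.
            new Thread(() -> {
                try (Socket s = service.accept()) {
                    s.getOutputStream().write("hello from serverX\n".getBytes());
                } catch (IOException ignored) { }
            }).start();

            // The tunnel: accept a local client, connect to the service, relay both ways.
            new Thread(() -> {
                try (Socket client = tunnel.accept();
                     Socket target = new Socket("localhost", service.getLocalPort())) {
                    pipe(client.getInputStream(), target.getOutputStream());
                    target.getInputStream().transferTo(client.getOutputStream());
                } catch (IOException ignored) { }
            }).start();

            // The client only ever talks to the local tunnel port, just like the
            // browser pointed at http://localhost:12345 in the article.
            try (Socket client = new Socket("localhost", tunnel.getLocalPort());
                 BufferedReader reader = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                System.out.println(reader.readLine());
            }
        }
    }
}
```

The client never learns where the service actually lives; it only sees the local port, which is exactly why the JDBC URL in the article points at localhost.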

SBT AutoPlugins Tutorial

This tutorial will guide you through the process of writing your own sbt plugin. There are several reasons to do this, and it’s really simple:

- Add customized build steps to your continuous integration process
- Provide default settings for different environments for various projects

Before you start, make sure to have sbt installed on your machine and accessible via the command line. The code is available on GitHub. If you start at the first commit, you can just step through the tutorial with each commit.

Setup

The first step is to set up our plugin project. There is only one important setting in your build.sbt, the rest is up to you.

sbtPlugin := true

This will mark your project as an sbt-plugin build. For this tutorial I’m using sbt 0.13.7-M3, which lets you write your build.sbt as you like – no need for separating blank lines. The complete build.sbt looks like this:

name := "awesome-os"
organization := "de.mukis"

scalaVersion in Global := "2.10.2"

sbtPlugin := true

// Settings to build a nice looking plugin site
site.settings
com.typesafe.sbt.SbtSite.SiteKeys.siteMappings <+= (baseDirectory) map { dir =>
  val nojekyll = dir / "src" / "site" / ".nojekyll"
  nojekyll -> ".nojekyll"
}
site.sphinxSupport()
site.includeScaladoc()

// enable github pages
ghpages.settings
git.remoteRepo := "git@github.com:muuki88/sbt-autoplugins-tutorial.git"

// Scripted - sbt plugin tests
scriptedSettings
scriptedLaunchOpts <+= version apply { v => "-Dproject.version="+v }

The plugins used inside this build are configured in project/plugins.sbt. Nothing special.

The Plugin

Now we implement a first working version of our plugin and a test project to try it out. What the plugin will actually do is print out awesome operating systems. Later we will customize this behavior. Let’s take a look at our plugin code.
import sbt._
import sbt.Keys.{ streams }

/**
 * This plugin helps you see which operating systems are awesome
 */
object AwesomeOSPlugin extends AutoPlugin {

  /**
   * Defines all settings/tasks that get automatically imported
   * when the plugin is enabled
   */
  object autoImport {
    lazy val awesomeOsPrint = TaskKey[Unit]("awesome-os-print", "Prints all awesome operating systems")
    lazy val awesomeOsList = SettingKey[Seq[String]]("awesome-os-list", "A list of awesome operating systems")
  }

  import autoImport._

  /**
   * Provide default settings
   */
  override lazy val projectSettings = Seq(
    awesomeOsList := Seq(
      "Ubuntu 12.04 LTS", "Ubuntu 14.04 LTS", "Debian Squeeze",
      "Fedora 20", "CentOS 6",
      "Android 4.x",
      "Windows 2000", "Windows XP", "Windows 7", "Windows 8.1",
      "MacOS Maverick", "MacOS Yosemite",
      "iOS 6", "iOS 7"
    ),
    awesomeOsPrint := {
      awesomeOsList.value foreach (os => streams.value.log.info(os))
    }
  )
}

And that’s it. We define two keys. awesomeOsList is a SettingKey, which means it’s set upfront and will only change if someone explicitly sets it to another value or changes it, e.g.:

awesomeOsList += "Solaris"

awesomeOsPrint is a task, which means it gets executed each time you call it.

Test Project

Let’s try the plugin out. For this we create a test project which has a plugin dependency on our awesome OS plugin. We create a test-project directory in the root directory of our plugin project. Inside test-project we add a build.sbt with the following contents:

name := "test-project"

version := "1.0"

// enable our new plugin
enablePlugins(AwesomeOSPlugin)

However, the real trick is done inside test-project/project/plugins.sbt. We create a reference to the project in the parent directory:

// build root project
lazy val root = Project("plugins", file(".")) dependsOn(awesomeOS)

// depends on the awesomeOS project
lazy val awesomeOS = file("..").getAbsoluteFile.toURI

And that’s all. Run sbt inside the test-project and print out awesome operating systems.
sbt awesomeOsPrint

If you change something in your plugin code, just call reload and your test-project will recompile with the changes.

Add a new task and test it

Next, we add a task which stores the awesomeOsList inside a file. This is something we can test automatically. Testing sbt plugins is a bit tedious, but doable with the scripted plugin. First we create a folder inside src/sbt-test. The directories inside sbt-test can be seen as categories where you put your tests. I created a global folder where I put two test projects. The critical configuration is again inside the project/plugins.sbt:

addSbtPlugin("de.mukis" % "awesome-os" % sys.props("project.version"))

The scripted plugin first publishes the plugin locally and then passes the version number to each started sbt test build via the system property project.version. We added this behaviour in our build.sbt earlier:

scriptedLaunchOpts <+= version apply { v => "-Dproject.version="+v }

Each test project contains a file called test, which can contain sbt commands and some simple check commands. Normally you put in some simple checks like file exists and do the more sophisticated stuff inside a task defined in the test project. The test file for our second test looks like this:

# Create the another-os.txt file
> awesomeOsStore
$ exists target/another-os.txt
> check-os-list

The check-os-list task is defined inside the build.sbt of the test project (src/sbt-test/global/store-custom-oslist/build.sbt):

enablePlugins(AwesomeOSPlugin)

name := "simple-test"

version := "0.1.0"

awesomeOsFileName := "another-os.txt"

// this is the scripted test
TaskKey[Unit]("check-os-list") := {
  val list = IO.read(target.value / awesomeOsFileName.value)
  assert(list contains "Ubuntu", "Ubuntu not present in awesome operating systems: " + list)
}

Separate operating systems per plugin

Our next goal is to customize the operating system list, so users may choose which systems they like most.
We do this by generating a configuration scope for each operating system category and a plugin that configures the settings in that scope. In a real-world plugin you can use this to define different actions in different environments, e.g. development, staging or production. This is a very crucial point of autoplugins, as it allows you to enable specific plugins to get a different build flavor and/or create different scopes which are configured by different plugins. The first step is to create three new autoplugins: AwesomeWindowsPlugin, AwesomeMacPlugin and AwesomeLinuxPlugin. They all work in the same fashion:

- Scope the projectSettings from AwesomeOSPlugin to their custom-defined configuration scope and provide them as settings
- Override specific settings/tasks inside the custom-defined configuration scope

The AwesomeLinuxPlugin looks like this:

import sbt._

object AwesomeLinuxPlugin extends AutoPlugin {

  object autoImport {
    /** Custom configuration scope */
    lazy val Linux = config("awesomeLinux")
  }

  import AwesomeOSPlugin.autoImport._
  import autoImport._

  /** This plugin requires the AwesomeOSPlugin to be enabled */
  override def requires = AwesomeOSPlugin

  /** If all requirements are met, this plugin will automatically get enabled */
  override def trigger = allRequirements

  /**
   * 1. Use the AwesomeOSPlugin settings as defaults and scope them to Linux
   * 2. Override the default settings inside the Linux scope
   */
  override lazy val projectSettings = inConfig(Linux)(AwesomeOSPlugin.projectSettings) ++ settings

  /**
   * the Linux specific settings
   */
  private lazy val settings: Seq[Setting[_]] = Seq(
    awesomeOsList in Linux := Seq(
      "Ubuntu 12.04 LTS",
      "Ubuntu 14.04 LTS",
      "Debian Squeeze",
      "Fedora 20",
      "CentOS 6",
      "Android 4.x"),
    // add awesome os to the general list
    awesomeOsList ++= (awesomeOsList in Linux).value
  )
}

The other plugins are defined in the same way. Let’s try things out. Start sbt in your test-project.
sbt
awesomeOsPrint               # will print all operating systems
awesomeWindows:awesomeOsPrint  # will only print awesome windows os
awesomeMac:awesomeOsPrint      # only mac
awesomeLinux:awesomeOsPrint    # only linux

SBT already provides some scopes like Compile, Test, etc., so there’s only a small need for creating your very own scopes. Most of the time you will use the ones already provided and customize them in your plugins. One more note: you may wonder why all the plugins get enabled even though we didn’t have to change anything in the test-project. That’s another benefit of autoplugins. You can specify requires, which defines dependencies between plugins, and trigger, which specifies when your plugin should be enabled.

// what is required so that this plugin can be enabled
override def requires = AwesomeOSPlugin

// when should this plugin be enabled
override def trigger = allRequirements

Users of your plugin now don’t have to care about the order in which they put the plugins in their build.sbt, because the developer defines the requirements upfront and sbt will try to fulfill them.

Conclusion

SBT autoplugins make the life of plugin users and developers a lot easier. They lower the steep learning curve of sbt a bit and lead to more readable build files. For sbt plugin developers, the migration process isn’t very difficult: replace sbt.Plugin with sbt.AutoPlugin and create an autoImport field.

Reference: SBT AutoPlugins Tutorial from our JCG partner Nepomuk Seiler at the mukis.de blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.