How to write one of the fastest expression evaluators in Java

Granted, the title is a bit of an attention grabber, but it is nevertheless true (of course, you should never trust a benchmark you didn’t fake yourself – but that’s another story).

So last week I was looking for a small and usable library to evaluate mathematical expressions. I almost immediately stumbled upon this Stack Overflow post. The recommended library (Expr) is really quite fast and had almost everything I needed. However, what it didn’t provide was the ability to limit the scope of variables (everything lives in one global namespace within the VM).

Therefore I did what one normally shouldn’t do: I reinvented the wheel and wrote my own parser / evaluator. It was a rainy Saturday anyway, so I thought a small recursive descent parser, an AST which simplifies and eventually computes expressions, along with a little helper for managing variables, wouldn’t be a big deal. And it wasn’t. I had an initial implementation up and running quite quickly. Once I had some tests giving me confidence that it computed everything the right way, I wanted to know how fast the evaluator was compared to the other libraries mentioned in the original post. Not having hand-optimized every inner loop, I didn’t have high expectations – some of the libraries are commercial ones, after all. So I was quite surprised when I looked at the results. The list below shows a micro benchmark which evaluates the same expression using the respective library. The measurements for parsii, which is my library, were done using the final version, which performs some simplifications, like pre-evaluating constant expressions. However, no “black magic” like bytecode generation or anything in that league is done.

For the performance measurement the expression “2 + (7 - 5) * 3.14159 * x^(12-10) + sin(-3.141)” was evaluated with x running from 0 to 1000000. This was done 10 times to warm up the JIT and then 15 more times, of which the average execution time was taken (a sketch of the measurement loop follows the result list below):

  • PARSII:      28.3 ms
  • EXPR:        37.2 ms
  • MathEval:  7748.5 ms
  • JEP:        647.0 ms
  • MESP:       220.8 ms
  • JFEP:       274.3 ms
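
To make the methodology concrete, the measurement loop can be sketched roughly as follows. This is only an illustration of the timing scheme described above, not the actual benchmark code; evaluateExpression() is a hypothetical placeholder for a call into the library under test.

    public class BenchmarkSketch {

        public static void main(String[] args) {
            // Warm up the JIT with 10 untimed rounds ...
            for (int round = 0; round < 10; round++) {
                runOnce();
            }
            // ... then average the execution time of 15 timed rounds.
            long totalNanos = 0;
            for (int round = 0; round < 15; round++) {
                long start = System.nanoTime();
                runOnce();
                totalNanos += System.nanoTime() - start;
            }
            System.out.printf("average: %.1f ms%n", totalNanos / 15 / 1_000_000d);
        }

        private static double runOnce() {
            double sum = 0;
            for (int x = 0; x <= 1_000_000; x++) {
                sum += evaluateExpression(x);
            }
            return sum;
        }

        // Hypothetical placeholder: in the real benchmark this is a call into
        // the respective expression library.
        private static double evaluateExpression(double x) {
            return 2 + (7 - 5) * 3.14159 * Math.pow(x, 12 - 10) + Math.sin(-3.141);
        }
    }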

Now I’m sure each of these libraries has its own strengths, so they can’t be compared directly. Still, it’s amazing to see that a simple implementation can compete quite well.

For those of you who are not too deep into compiler construction, here’s a small outline of how it works:

Like any parser or compiler, parsii uses the classic approach of a tokenizer, which converts a stream of characters into a stream of tokens. Therefore “4 + 3 *8”, which is '4', ' ', '+', ' ', '3', ' ', '*', '8' as a character array, will be converted into:

  • 4 (INTEGER)
  • + (SYMBOL)
  • 3 (INTEGER)
  • * (SYMBOL)
  • 8 (INTEGER)

The tokenizer looks at the current character, decides what kind of token it is looking at and then reads all characters which belong to that token. Each token has a type, its textual contents, and knows the position (line and character) where it started. A lot of in-depth tutorials are available on the net, so I won’t go into any details here. You can take a look at the source code, but as I said, it is just a basic, naive implementation.
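
To illustrate the idea, here is a heavily simplified tokenizer sketch – illustrative names only (SimpleTokenizer, Token), not parsii’s actual classes: it looks at the current character and then consumes everything that belongs to the token.

    import java.util.ArrayList;
    import java.util.List;

    class Token {
        enum Type { INTEGER, SYMBOL }

        final Type type;
        final String contents;
        final int position; // where the token started in the input

        Token(Type type, String contents, int position) {
            this.type = type;
            this.contents = contents;
            this.position = position;
        }

        @Override
        public String toString() {
            return contents + " (" + type + ")";
        }
    }

    class SimpleTokenizer {
        private final String input;
        private int pos = 0;

        SimpleTokenizer(String input) {
            this.input = input;
        }

        List<Token> tokenize() {
            List<Token> tokens = new ArrayList<>();
            while (pos < input.length()) {
                char ch = input.charAt(pos);
                if (Character.isWhitespace(ch)) {
                    pos++;                              // whitespace separates tokens
                } else if (Character.isDigit(ch)) {
                    int start = pos;
                    while (pos < input.length() && Character.isDigit(input.charAt(pos))) {
                        pos++;                          // consume all digits of the number
                    }
                    tokens.add(new Token(Token.Type.INTEGER, input.substring(start, pos), start));
                } else {
                    tokens.add(new Token(Token.Type.SYMBOL, String.valueOf(ch), pos));
                    pos++;
                }
            }
            return tokens;
        }
    }

Calling new SimpleTokenizer("4 + 3 *8").tokenize() yields exactly the five tokens listed above.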

The parser, which translates the given stream of tokens into an AST (Abstract Syntax Tree) that can then be evaluated, is a classic recursive descent parser. This is one of the simplest ways to build a parser, as it is completely written by hand and not generated by a tool. A parser like this basically contains a method for every syntax rule.
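
The structure becomes clearer with a small example. The following self-contained sketch (not parsii’s actual parser) contains one method per grammar rule; for brevity it evaluates directly instead of building an AST, but the recursive structure is the same.

    // Grammar handled by the sketch:
    //   expression := term ( ("+" | "-") term )*
    //   term       := factor ( ("*" | "/") factor )*
    //   factor     := NUMBER | "(" expression ")"
    class TinyParser {
        private final String input;
        private int pos = 0;

        TinyParser(String input) {
            this.input = input.replace(" ", "");
        }

        // One method per syntax rule, each calling the methods of the rules it uses.
        double expression() {
            double left = term();
            while (peek() == '+' || peek() == '-') {
                char op = next();
                double right = term();
                left = (op == '+') ? left + right : left - right;
            }
            return left;
        }

        double term() {
            double left = factor();
            while (peek() == '*' || peek() == '/') {
                char op = next();
                double right = factor();
                left = (op == '*') ? left * right : left / right;
            }
            return left;
        }

        double factor() {
            if (peek() == '(') {
                next();                       // consume '('
                double value = expression();  // recurse into the sub-expression
                next();                       // consume ')'
                return value;
            }
            int start = pos;
            while (pos < input.length() && (Character.isDigit(peek()) || peek() == '.')) {
                pos++;
            }
            return Double.parseDouble(input.substring(start, pos));
        }

        private char peek() {
            return pos < input.length() ? input.charAt(pos) : '\0';
        }

        private char next() {
            return input.charAt(pos++);
        }
    }

For example, new TinyParser("4 + 3 * 8").expression() returns 28.0.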

Again, a lot of tutorials for this kind of parser are available. However, what most examples leave out is proper error handling. Besides parsing an expression correctly and quickly, good error handling is one of the central aspects of a good parser. And it’s not that hard: as you can see in the source code, the parser never throws an exception while parsing the expression. All errors are collected and the parser continues as long as possible. Even though the resulting AST cannot be evaluated correctly after the first error, it is important to go on, as we can and should report as many errors as possible in one run. The same approach is used by the tokenizer, which reports malformed tokens, like decimal numbers with two decimal separators, to the same list of errors.
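
A minimal sketch of the “collect, don’t throw” idea (illustrative names, not parsii’s actual error classes): a helper records the position and message of a problem and lets parsing continue, so that one run can report several errors.

    import java.util.ArrayList;
    import java.util.List;

    class ErrorCollector {
        private final List<String> errors = new ArrayList<>();

        // Instead of aborting the parse, record where and what went wrong.
        void addError(int line, int column, String message) {
            errors.add(String.format("%d:%d: %s", line, column, message));
        }

        boolean hasErrors() {
            return !errors.isEmpty();
        }

        List<String> getErrors() {
            return errors;
        }
    }

Both the tokenizer and the parser would write into the same instance, and the caller checks hasErrors() once parsing is done.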

Evaluating an AST, which is the result of parsing an expression, is quite easy. Each node of the syntax tree has an evaluate method which is called by its parent node, starting from the root node. The result of calling evaluate on the root node is the result of evaluating the expression. A basic example of this approach can be found in BinaryOperation, which represents operations like +, -, * and so on.
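
In code, the idea looks roughly like this – a simplified sketch, not parsii’s actual Expression or BinaryOperation classes:

    interface Expression {
        double evaluate();
    }

    // A leaf node holding a plain number.
    class Constant implements Expression {
        private final double value;

        Constant(double value) {
            this.value = value;
        }

        @Override
        public double evaluate() {
            return value;
        }
    }

    // An inner node for +, -, *, /: evaluate both children, then apply the operator.
    class BinaryOperation implements Expression {
        enum Op { ADD, SUBTRACT, MULTIPLY, DIVIDE }

        private final Op op;
        private final Expression left;
        private final Expression right;

        BinaryOperation(Op op, Expression left, Expression right) {
            this.op = op;
            this.left = left;
            this.right = right;
        }

        @Override
        public double evaluate() {
            double a = left.evaluate();
            double b = right.evaluate();
            switch (op) {
                case ADD:      return a + b;
                case SUBTRACT: return a - b;
                case MULTIPLY: return a * b;
                case DIVIDE:   return a / b;
                default:       throw new IllegalStateException("unknown operator: " + op);
            }
        }
    }

For example, new BinaryOperation(BinaryOperation.Op.ADD, new Constant(4), new Constant(3)).evaluate() returns 7.0.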

In order to improve evaluation time a bit, three optimizations are performed:

First, after parsing, the AST is reduced by calling a method named simplify on the root node, which propagates the call to each child node. Each node then decides whether a simpler representation of its own sub-expression can be found. As an example: for binary operations, we check if both operands are constant (numbers). In that case, we evaluate the expression and return a new constant containing the result of the operation. The same is done for functions where all parameters are constant.
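
Inside the BinaryOperation sketch above, constant folding could look like this, assuming the Expression interface additionally declares an Expression simplify() method (and that Constant.simplify() simply returns this) – again an illustration, not parsii’s actual code:

    public Expression simplify() {
        Expression simpleLeft = left.simplify();
        Expression simpleRight = right.simplify();
        // If both operands simplify to constants, pre-evaluate the operation once
        // and replace the whole node by a single Constant holding the result.
        if (simpleLeft instanceof Constant && simpleRight instanceof Constant) {
            return new Constant(new BinaryOperation(op, simpleLeft, simpleRight).evaluate());
        }
        // Otherwise keep the operation, but with simplified children.
        return new BinaryOperation(op, simpleLeft, simpleRight);
    }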

The second optimization concerns the use of variables in expressions. The naive approach here is to use a map and read or write the values of variables when needed. While this certainly works, a lot of lookups will be performed. Therefore we have a special class called Variable which contains the name and the numeric value of the variable. When an expression is parsed, the variable is looked up once in the scope (which is basically just a map) and that instance is used from then on. As each lookup returns the same instance, variable access when evaluating expressions is as cheap as a field read or write, since we just access the value field of Variable.
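
A sketch of this idea (simplified; parsii’s real Variable and Scope classes differ in detail, and VariableReference is an illustrative name reusing the Expression interface from the sketch above):

    import java.util.HashMap;
    import java.util.Map;

    class Variable {
        private final String name;
        private double value;

        Variable(String name) {
            this.name = name;
        }

        double getValue() {
            return value;       // a plain field read
        }

        void setValue(double value) {
            this.value = value; // a plain field write
        }

        String getName() {
            return name;
        }
    }

    class Scope {
        private final Map<String, Variable> variables = new HashMap<>();

        // Looked up once while parsing; afterwards everybody holds on to the
        // returned instance, so no map lookup happens during evaluation.
        Variable getVariable(String name) {
            return variables.computeIfAbsent(name, Variable::new);
        }
    }

    // The AST node for a variable just keeps the resolved instance around.
    class VariableReference implements Expression {
        private final Variable variable;

        VariableReference(Variable variable) {
            this.variable = variable;
        }

        @Override
        public double evaluate() {
            return variable.getValue(); // no lookup, just a field access
        }
    }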

The third and last optimization probably won’t come into play very often. But as it was simple to realize, it was implemented anyway. It goes by the name “lazy evaluation” and is used when calling functions. A function does not automatically evaluate all its arguments and then perform the function call itself. Rather, it looks at the arguments and can decide by itself which arguments to evaluate and which not. An example where this is used can be found in the if function.
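
As a sketch (illustrative, reusing the Expression interface from above – not parsii’s actual function API): the arguments arrive as unevaluated expressions, and only the branch that is actually needed gets evaluated.

    // if(condition, then, else) with lazily evaluated branches.
    class IfFunction {
        double call(Expression condition, Expression whenTrue, Expression whenFalse) {
            if (condition.evaluate() != 0) {
                return whenTrue.evaluate();   // whenFalse is never evaluated
            }
            return whenFalse.evaluate();      // whenTrue is never evaluated
        }
    }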

parsii is licensed under the MIT license. All sources can be found on GitHub along with a pre-compiled jar.
 


4 Responses to "How to write one of the fastest expression evaluators in Java"

  1. Davassi says:

    I wonder if evaluating the expression using a Reverse Polish notation approach could be even faster than the recursive one.

  2. Andy says:

    Well, as always it depends on what you measure. It will definitely be easier and faster to parse – concerning the evaluation there won’t be much of a difference since the internal data structures are the same. RPN is only another notation.

    That’s my guess so far. You’re more than welcome to try it out, I’d love to see a faster implementation than mine – there’s always something to learn ;-)

    cheers Andy

  3. Cd-MaN says:

    Hey!

    I beat your expression evaluator by a factor of 10x (ie. I have a 10 times faster evaluator). Check out the details here: http://www.transylvania-jug.org/archives/5777

    • Andy says:

      Hi Attila-Mihaly,

nice work! However, just a few minor things: parsii can handle variables correctly, but you need to attach the scope to the expression: parsii.eval.Parser.parse("2 + (7-5) * 3.14159 * x + sin(0)", scope);

Also, it is now (as of two days ago) available via Maven: com.scireum.parsii-1.0

It is amazing how fast a compiled version really is (as you can see, once parsed, parsii doesn’t do very much around it). One other benefit of parsii and Janino is that an invalid expression results in a proper error message (instead of a “java.lang.Security” exception for fastexpr). However, I understand that your code is a prototypical implementation and this issue could be fixed easily. I’d say the most elegant approach would be parsing “by hand” and then manually generating bytecode for the JVM (there are good libs for it). This would give you raw speed + security (no System.exit(0)….).

Also thanks for introducing me to caliper – didn’t know it yet. Really cool and easy to use.
