
Advantages of Functional Programming

In Part I of this series, I talked about the core concepts of functional programming and gave a few examples of how they come into play. The list of core concepts of functional programming (again from Part I) is as follows:

  1. Usage of functions as input to and output from other functions
    (higher-order functions)
  2. Usage of map, filter, and reduce type functions instead of looping
  3. Immutable state
  4. Recursion in place of looping
  5. Composing functions from other functions
  6. Distinguishing “pure” functions from functions with side effects

I gave a few examples using the first four items, so let’s get a little more down and dirty with those, adding in Number 5 as well.

Building an Efficient Chess AI with Elm

But first, a little information about chess programming:

Writing an AI for a chess-like game is quite an interesting undertaking. Real chess AIs have to be very complicated. Computers play chess in a much more brute force manner than humans do. They try to look at every possible sequence of moves and assign a value to the resulting position. This brute force approach is so work-intensive that without significant optimizations, an AI would take days to move and require huge amounts of memory as well.

The challenge of optimizing chess AI code has generated some wonderful algorithms, like alpha-beta pruning and minimax search. Deep Blue, the AI that beat world champion Garry Kasparov, had specialized chess chips built into it that made it many times faster than software-only solutions. Also, the Encyclopedia of Chess Openings contains the best moves for all of the popular openings, and these are hard-coded into chess programs as data, requiring no work on the part of the AI for the first ten to twenty moves of each game.

The AI for my pet project, Wounds (see Part I of this series), is very primitive by comparison. When evaluating a given position on the board, it compares the strength of the two armies (including wounded pieces), and it compares their respective mobility using just a raw count of how many legal moves each side can make.

We will see how many moves we can look into the future without running out of some resource (including human patience).

For simplicity, let’s assume that the number of legal moves on each consecutive turn stays the same (it doesn’t; it usually increases with every move in the first part of the game). The first player, which in Wounds is the Red player, has about 30 legal moves at the start of the game. The AI would try each of these moves for ply 1, and then it would make each of the Blue team’s responses (ply 2), which would be 30 times 30, or 900. You see where this is going. It would next consider Red’s 27,000 second moves (ply 3) and Blue’s 810,000 replies to those (ply 4). It’s easy to run out of time and memory when you start looking farther than that.
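
To make that growth concrete, here is a tiny Elm sketch (purely illustrative, not part of the Wounds code) of how the number of positions explodes with the search depth, or ply, assuming a constant branching factor:

-- Hypothetical helper, not from the game: positions examined at a given ply,
-- assuming a constant branching factor (about 30 legal moves per turn).
positionsAtPly : Int -> Int -> Int
positionsAtPly branchingFactor ply =
  branchingFactor ^ ply

-- positionsAtPly 30 1 == 30
-- positionsAtPly 30 2 == 900
-- positionsAtPly 30 3 == 27000
-- positionsAtPly 30 4 == 810000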

Comparing Objective-C and Elm

When I wrote this code in Objective-C, before I had encountered functional programming, I saw that recursion would let me write clean code for it. When I implemented the recursive version, I found that at 5 plies deep it would crash my iPad. There were two reasons for that: one, a copy of each board position needs to be saved before making the next move on a board, and two, Objective-C wasn’t optimized for recursion and so incurred additional overhead. Recursive calls (without optimization) eat up stack space, so that’s usually the weakest link.

Let’s hope this works better in Elm. Take a look at this:

makeAIMove : Player -> Board -> Board
makeAIMove player board =
  let
    moveList = getAIMoves board player
    scoredMobility = List.map (\move -> scoreMoveMobility board player move) moveList
    scoredMoves = List.map (\move -> scoreMoveMaterial board player move) scoredMobility
    sortedMoves = List.sortBy .score scoredMoves |> List.reverse 
    maybeMove = List.head sortedMoves
  in
    case maybeMove of
      Just move ->
        let _ = Debug.log "score " move.score
        in
          makeMove board move
      _ ->
        board

Here we see a couple of calls to List.map, which takes a function as its first argument and a list as its last. The anonymous function passed in can also capture any other parameters it needs, such as board and player. List.map calls the function on every item in the list and returns a new list of the results. In this case, I am passing around lists of moves: first all of the legal moves, then those moves scored for mobility, then with material scores added, and finally sorted so that we can grab the head of the list, which will have the best score.
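
If List.map is new to you, here is a minimal, stand-alone illustration (not from the game code) of what it does:

double : Int -> Int
double n =
  n * 2

-- List.map double [ 1, 2, 3 ] == [ 2, 4, 6 ]
-- The same thing with an anonymous function:
-- List.map (\n -> n * 2) [ 1, 2, 3 ] == [ 2, 4, 6 ]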

Notice how, in makeAIMove above, the result of each successive call is put into its own variable, which is then passed to the next function. Here is my next attempt, where I use chaining to eliminate those temporary variables:

makeAIMove : Player -> Board -> Board
makeAIMove player board =
  let
    maybeMove =
      getAIMoves board player
        |> List.map (\move -> scoreMoveMobility board player move)
        |> List.map (\move -> scoreMoveMaterial board player move)
        |> List.sortBy .score
        |> List.reverse
        |> List.head
  in
    case maybeMove of
      Just move ->
        makeMove board move
      _ ->
        board

In Elm, the |> operator takes the value of the expression on its left and passes it as the last parameter to the function on its right. If you compare the two code snippets above, you will see that in the second snippet the parameter at the end of each line is gone. This parameter-passing order is not standardized among languages.

For example, chaining functions in Elixir sends the output as the first input parameter of the following function rather than as the last. Chaining like this looks similar to the pipe operator used to connect UNIX command-line utilities, but under the hood, so to speak, there is something more powerful going on.
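
Here is a small sketch (not from the game code) showing that Elm’s |> is just ordinary function application with the arguments flipped: x |> f means f x.

tenTimesSorted : List Int
tenTimesSorted =
  [ 3, 1, 2 ]
    |> List.sort               -- same as List.sort [ 3, 1, 2 ]
    |> List.map (\n -> n * 10) -- same as List.map (\n -> n * 10) [ 1, 2, 3 ]

-- tenTimesSorted == [ 10, 20, 30 ]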

Partially applied functions

I lied a little bit when I said the output of one function is passed to the next function. Each expression to the right of a |> is really a partially applied function: a function that has already received some of its arguments and is waiting for the rest. The language feature that makes this possible, treating every multi-argument function as a chain of single-argument functions, is called currying.
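
Here is what partial application looks like in Elm, with illustrative names that are not from the game code:

add : Int -> Int -> Int
add x y =
  x + y

addFive : Int -> Int
addFive =
  add 5    -- a partially applied function: add has its first argument and waits for the second

-- addFive 3 == 8
-- List.map addFive [ 1, 2, 3 ] == [ 6, 7, 8 ]
-- In the chained makeAIMove above, each expression after a |>
-- (such as List.sortBy .score) is exactly this kind of partially applied function.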

Currying is named in homage to Haskell Brooks Curry, a pioneer in combinatory logic. Not only does the verb ‘to curry’ refer to him and his work, but there are three functional programming languages also named in his honor: Haskell, Brook, and Curry. Of these three, only Haskell has a significant following.

Passing around partially applied functions like this is another reason for the name functional programming. Yet another reason this name is apt is that functional programmers are very careful about what they call side effects.

In object-oriented programming, it is commonplace for objects to contain internal state and for their methods to both read from that state and write to it. Code and data become interwoven. If this isn’t carefully designed, there can be all kinds of hard-to-foresee side effects.

Functional programmers like to be very careful about these side effects. Given the same input, a “pure” function always returns the same output; furthermore, it neither reads nor modifies any outside state. Since very few applications can work without some application state, that needs to be addressed somehow. So there is still state, but it is handled very carefully and conscientiously.
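
As a tiny, hypothetical illustration (not from the game code): a pure function’s result depends only on its arguments, so it never surprises you and can be tested in isolation.

-- Pure: same inputs always give the same output, and nothing else is touched.
scoreDifference : Int -> Int -> Int
scoreDifference redScore blueScore =
  redScore - blueScore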

In React/Redux for example, you handle state by coding Redux functions called “reducers.” Then you create “actions.” When Redux receives an action, it carefully applies any middleware, then calls all of the reducers. Reducers can act on actions that they are interested in or just pass the buck, so to speak. React/Redux is not a purely functional toolchain, but the authors say that they have been heavily influenced by Elm’s architecture.
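
Since the Redux authors cite Elm’s architecture as an influence, the same pattern is easy to sketch in Elm terms. Here is a minimal update function, with hypothetical Msg and Model types that are not from the game, playing the role a reducer plays in Redux: it receives a message (an action) and the current state, and returns a new state without mutating anything.

type Msg
  = Increment
  | Reset

type alias Model =
  { count : Int }

update : Msg -> Model -> Model
update msg model =
  case msg of
    Increment ->
      -- return a fresh copy of the state; the old one is never modified
      { model | count = model.count + 1 }

    Reset ->
      { model | count = 0 }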

The elegance of immutable state

Let’s talk about immutable state again, and this time go a little deeper into the elegance of using it in your code. “Pure” functional programming tools enforce immutable state. Semifunctional tools, like React (with Redux), both strongly suggest and also assume that you will adhere to the immutable-state design pattern. For example, in ReactJS you are allowed to call

this.state = { value: 'new state' };

in a constructor() but subsequent state changes are supposed to use

this.setState({ value: 'new state' });

This appears not to be enforced, so I can imagine myself introducing bugs by forgetting this rule. Kind of like when I have typed = instead of ==. I bet that’s never happened to you ;).

Functional languages like Elixir and Elm return an updated copy of the data rather than modifying the data itself. Intuitively, this seems expensive in both processing time and memory, but these tools are built around persistent data structures that share the unchanged parts between copies, so in practice the cost is much lower than it appears.
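
Here is a small sketch of what returning an updated copy looks like in Elm, using a hypothetical record type that is not from the game code; note that the original value is left untouched:

type alias Score =
  { material : Int, mobility : Int }

addMobility : Int -> Score -> Score
addMobility extra score =
  { score | mobility = score.mobility + extra }

-- original = { material = 10, mobility = 4 }
-- updated  = addMobility 2 original
-- original.mobility == 4   -- the old record is unchanged
-- updated.mobility  == 6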

Here’s my (edited for simplicity) Objective-C recursive code that searches for good AI moves:

- (Move *) pickBestMove: (Board *) board forPlayer: (Player *) player depth: (int) depth
{
    Move *bestMove = nil;
    NSMutableArray *moves = [self generateLegalMoves:board forPlayer:player];
    if(moves.count > 0)
    {
        bestMove = moves[0];
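        // odd plies add the interim score and keep the maximum;
        // even plies subtract it and keep the minimum (a crude minimax)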
        BOOL alphaBeta = depth % 2 == 1;
        for(Move *move in moves)
        {
            move.score = 0;
            int interimValueScore = 0;
            if(move.defending_man != nil)
            {
                interimValueScore += move.defending_man.value;
            }
            if(self.searchDepth > depth)
            {
                if(move.defending_man != nil)
                {
                    if(depth < 4)
                    {
                        Board *tempBoard = [board copy]; 
                        [tempBoard makeMove:move];
                        Player *nextPlayer = [self getNextPlayer:player];
                        // here's the recursive call
                        Move *nextPlyBestMove =
                            [self pickBestMove:tempBoard forPlayer:nextPlayer depth:depth + 1];
                        if(nextPlyBestMove != nil)
                        {
                            interimValueScore += nextPlyBestMove.score;
                        }
                        [tempBoard unMakeMove:move];
                    }
                }
            }
            if(alphaBeta)
            {
                move.score += interimValueScore;
                if(move.score > bestMove.score)
                {
                    bestMove = move;
                }
            }
            else
            {
                move.score -= interimValueScore;
                if(move.score < bestMove.score)
                {
                    bestMove = move;
                }
            }
        }
    }
    return bestMove;
}

Each time I search a level (ply) deeper, I make a new copy of the board, then make each move on that copy and give the resulting position a score. This code was very hard to debug, requiring all kinds of instrumentation to be able to stop at a certain depth and board position. It’s also very slow, even on my iMac, which limits how deep it can search and therefore how strong a game it plays.

Let’s go back to Elm for now, which requires that state be immutable. My first version of makeAIMove in Elm looked ahead one move when scoring mobility, and no moves when scoring material. This is very fast, but not too smart. It plays very aggressively, not considering how the opponent might respond. Here’s a smarter version that uses recursion to search to a depth controlled by an input parameter:

makeAIMove : Player -> Int -> Board -> Board
makeAIMove player searchDepth board =
  let
    maybeMove = pickBestMove player 1 searchDepth board
  in
    case maybeMove of
      Just move ->
        move
          |> makeMove board
      _ ->
        board

The pickBestMove function makes a recursive call to itself until it reaches a search depth specified by the limit parameter:

pickBestMove : Player -> Int -> Int -> Board -> Maybe Move
pickBestMove player depth limit board =
  let
    legalMoves = getAIMoves board player
    alphaBeta = (rem depth 2) /= 1

    setMoveScore move =
      if depth < limit then
        let
          nextBoard = makeMove board move
          nextPlayer = (Player.getNextPlayer player)
          -- here's the recursive call
          nextPlyBestMove =
            pickBestMove nextPlayer (depth + 1) limit nextBoard
        in
          case nextPlyBestMove of
            Just _ ->
              if alphaBeta then
                {move | score = move.score + (scoreBoard nextPlayer nextBoard)}
              else
                {move | score = move.score - (scoreBoard nextPlayer nextBoard)}
            _ ->
              move
      else
        {move | score = makeMove board move |> scoreBoard player}
  in
    List.map setMoveScore legalMoves
    |> List.sortBy .score
    |> List.reverse
    |> List.head

In the let section at the top, I define a local function setMoveScore that will get called for each move in the legalMoves list. setMoveScore recursively calls the outer function, pickBestMove. It’s an interesting pattern to have a local/internal function call the outer function recursively. It helps to think of setMoveScore as just a block of code rather than a discrete function.

The pattern {move | score = makeMove board move |> scoreBoard player} shows how Elm handles state changes. This snippet returns a new copy of move with its score updated. That fits in well with the List.map call, which creates a new list by applying a function to each item in the old list.

Likewise, the call nextBoard = makeMove board move returns a new board with the position changed. Player.getNextPlayer works the same way. This is quite useful, since we want to be able to use the original board and player in the last line of this function.

We don’t need to manage temporary boards at each ply depth or make sure we undo every move we make; that’s what the Objective-C example has to do, and it relies heavily on mutable state.

The immutable-state version is less bug-prone, and eventually it becomes easier to understand as well. When I first started to wrap my head around FP concepts, I found functional code harder to follow than my object-oriented code. Now that I feel more adept, I am seeing the appeal: a kind of addictive quality, where a correct solution to a problem has a feeling of elegance. In that moment, everything clicks.

Coming Up

The next post in this series will delve into React and Redux.

Published on Java Code Geeks with permission by Eric Ford, partner at our JCG program. See the original article here: Advantages of Functional Programming

Opinions expressed by Java Code Geeks contributors are their own.

Eric Ford

Eric Ford was born and raised in Silicon Valley, the son of an inventor and an English professor. He's run Eric Ford Consulting since leaving Apple in 1996.