Our browser-based puzzle game about hacktivism, Exploit: Zero Day, has two major components: Jobs and player-created puzzles. Jobs are what a traditional MMO would call PVE challenges; they're story crafted by us, which you play through alone. Player-created puzzles, on the other hand, are currently the closest thing we have to PVP challenges, although in our case the goal isn't really to defeat the other player but to give them an interesting challenge.
We're currently developing currency mechanics that will serve as an extra incentive for players to create puzzles and solve other players' puzzles (we call puzzles "systems," since they represent computer systems in the game's fiction). Players will be able to earn "scryp" by solving puzzles or having puzzles in their home cluster solved, which they can spend to make their home cluster more attractive and challenging. A big question arises, however: how do we set these rewards to encourage people to make the best systems they can?
The Wrong Way
For a while, in our closed alpha testing, we ranked players' solutions by how quickly they completed puzzle systems. Each system had a leaderboard based on completion time. Soon, one of our more tech-savvy players wrote a script that allowed him to get near-optimal scores. At one point, I think he was at the top of the leaderboard for every public system. At the same time, we were swamped with error reports generated every time his trial-and-error script failed. We took this incentive away and the error reports immediately stopped.
Our reward system has to reward genuinely clever play that brings value to the game. Having the quickest possible solution isn't something we particularly want to encourage. Having a solution of a certain speed, perhaps, but what we really want to do is reward players more for solving challenging systems, and reward creators for making those systems. That's the sort of incentive structure that should lead to interesting results and competition among players and creators.
But it matters how we judge that challenge. Imagine the trivial case where we measure a system by the number of nodes it contains. That would result in puzzles like this one being rated the most challenging:
That's certainly a challenging system, and having some of those is a good thing, but we don't want to encourage every system to be like that one. So our "challenge" algorithm has to take into account more than just quantity.
How to Judge Challenge
I won't be sharing our exact calculation, as publishing it would invite people to game the system. What I'll do instead is lay out the general principles we're using as we calculate system challenge for the reward system.
We actually do want to incorporate how much stuff is in a system; we call this "complexity." This does mean that the rules are weighted in favor of larger, fuller systems, but we offset that by requiring more scryp to install more complex systems in your home cluster. This will hopefully encourage players to be judicious and not add extra nodes everywhere just to bump their complexity up a bit.
Complexity can be a poor judge of challenge, however. One can easily imagine a system that is quite complex but not actually difficult, like some of the ones shown off in this thread. Likewise, you could have a fiendish puzzle in a pretty tidy space, like the ones in this cluster (which you can play without needing alpha access!).
You can get a good idea of challenge by looking at actual solutions. Theoretically, the more pings (actions) a puzzle requires to solve, including mistaken pings, the more challenging it is; likewise, it's more challenging the more nodes and ports that are actually used as part of the solution (and not just included for decoration or as red herrings). You might imagine that you could just use the creator's solution to judge this. However, that would leave an opening for abuse.
Creators have to solve a system before they publish it, but there's no good way to make sure that they provide anything like an optimal solution. A creator who understands the challenge rules could deliberately solve their system poorly, firing a bunch of useless pings to make their system look more challenging. They could also add a bunch of boring, unnecessary nodes and trigger them during their solution, even though their real solution is simpler. This doesn't only apply to malicious or exploitative creators, either; imagine a creator missing an obvious solution to their puzzle and solving it in an overly-complicated way.
To avoid this, we won't be using the creator's solution to judge challenge. Instead, we'll be examining the average solution across everyone who solves the system. As more players solve a system, this average will become more accurate. Specifically, we're calculating the median, not the mean, since we want to prevent outliers from affecting the calculation too much.
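To see why the median resists outliers, here's a minimal sketch in Python. The data structure and field names are invented for illustration, not Exploit: Zero Day's actual data model:

```python
from statistics import mean, median

# Hypothetical solution records for one system; "pings" and
# "nodes_used" are placeholder field names.
solutions = [
    {"player": "alice", "pings": 14, "nodes_used": 9},
    {"player": "bob",   "pings": 17, "nodes_used": 9},
    {"player": "carol", "pings": 15, "nodes_used": 10},
    {"player": "dave",  "pings": 90, "nodes_used": 9},  # one flailing run
]

ping_counts = [s["pings"] for s in solutions]

print(mean(ping_counts))    # 34 -- one bad run drags the mean way up
print(median(ping_counts))  # 16.0 -- the median barely moves
```

One deliberately wasteful run (or a creator padding their own solution) shifts the mean dramatically but leaves the median close to what typical players actually did.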
The Big Algorithm
Our equation for system solution rewards, then, is a weighted combination of three factors:
- The system complexity, based on the number of nodes and ports in the system
- The median number of pings required to solve the system
- The median number of ports and nodes used in the system's solution
Once we have this reward, we bound it so that no system reward is too low or too high, to avoid any weird outliers. This isn't a curve, which could let one unusual system distort all others; we just apply a floor and a ceiling to ensure that every system provides a reward but not too much of one.
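The general shape of that calculation, a weighted sum with a floor and a ceiling, can be sketched like this. The weights and bounds here are placeholders, not the game's real (deliberately unpublished) numbers:

```python
# Placeholder weights and bounds -- illustrative only.
W_COMPLEXITY, W_PINGS, W_USAGE = 0.5, 0.3, 0.2
REWARD_FLOOR, REWARD_CEILING = 5, 100

def solution_reward(complexity, median_pings, median_used):
    """Weighted combination of the three factors, then clamped."""
    raw = (W_COMPLEXITY * complexity
           + W_PINGS * median_pings
           + W_USAGE * median_used)
    # A simple floor/ceiling rather than a curve: one extreme system
    # can't distort the rewards of any other system.
    return min(max(round(raw), REWARD_FLOOR), REWARD_CEILING)

print(solution_reward(30, 16, 12))   # 22 -- a moderate system
print(solution_reward(400, 90, 80))  # 100 -- capped at the ceiling
```

Because each system's reward is clamped independently, a freakishly huge system hits the ceiling on its own without rescaling anyone else's payout, which is exactly why a curve would be the wrong tool here.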
Creators earn scryp for each solution based on the reward given to the solver. Right now, they earn one tenth of the solver's reward, which means that making a system that ten people solve is equivalent to solving one similar system. We'll probably tweak this proportion up and down based on what we observe of player behavior during testing.
We do want to add some incentive to replay systems and master the rules, so we award extra scryp to players who solve a system faster than the creator or in fewer pings than the median. Likewise, the creator gets some extra scryp for each player who doesn't beat their time.
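Putting the payout rules together: the one-tenth creator share comes straight from the design above, while the specific bonus amounts below are invented placeholders, assuming flat bonuses rather than percentages:

```python
# The 0.1 creator share is from the design; the bonus values are
# hypothetical placeholders.
CREATOR_SHARE = 0.1
SPEED_BONUS = 2    # solver beat the creator's completion time
PING_BONUS = 2     # solver used fewer pings than the median
DEFENSE_BONUS = 1  # creator's time stood against this solver

def payouts(base_reward, beat_creator_time, beat_median_pings):
    """Return (solver_scryp, creator_scryp) for one solution."""
    solver = base_reward
    if beat_creator_time:
        solver += SPEED_BONUS
    if beat_median_pings:
        solver += PING_BONUS
    creator = round(base_reward * CREATOR_SHARE)
    if not beat_creator_time:
        creator += DEFENSE_BONUS
    return solver, creator

print(payouts(20, False, True))  # (22, 3)
```

With a one-tenth share, ten solutions of a system pay the creator roughly what solving one comparable system would, matching the equivalence described above.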
You'll notice the reintroduction of solution time there. We do think there's some value in encouraging players to solve systems fast, but we want to adjust the importance of that aspect so that it's a nice bonus but not something folks feel compelled to spend a lot of effort on. If we see people using intensive automated approaches again, we can always take it out.
Once this is all complete, we hope that players will feel incentivized to create, share, and seek out systems. We may even reintroduce a leaderboard to the game, this one based on players' total wealth in scryp. Instead of being a measure of reflexes (or coding skill), it'll be a measure of their contribution to the game.