All About: Submissions – Part 3

(Read part one or two of All About Submissions here)

Now that we’ve reviewed the types of materials we request, we’ll get into who evaluates those materials and how their feedback impacts a submission.

Judge Selections

We maintain a relatively small pool of judges – but we select judges based on specific factors to get a wide variety of feedback. I’d generally put them in the following categories:

Power Users – Judges that will review more than 50% of the games. They provide insight into the overall quality of submissions compared to their peers, run QA, and provide quick feedback on most games and detailed feedback on a sub-set of particularly unique games.

Specialized – Judges that specialize in a specific genre or subset of the industry. RPGs are a good example of this: They’re hard to evaluate in short gameplay sessions and take a very long time to develop for small teams, which means it’s tough for judges to tell how far along the game actually is unless they have extensive experience in the genre. Educational games, Fighting Games and Interactive Fiction are other good examples.

Marketing – Judges that provide feedback mostly on marketing viability and promotional materials. This normally overlaps with Power Users since they provide quick impressions on a large volume of entries.

Non-Industry – These are unaffiliated friends, spouses, group play testers for multiplayer games, children or family members. They provide us with feedback on games in casual or non-hardcore genres. They also provide unbiased feedback on games where the company or game is well known in industry circles (which biases a lot of industry feedback).

Long Term – These judges help to give context and high level feedback on long term trends and comparative quality level. Generally it’s the core MEGABOOTH team and Power Users from multiple submission sets.

Games are assigned to judges based on their platform access (an iOS game will not be assigned to someone with only a Windows PC) and game genre. Judges are assigned a set of games, and once they’ve successfully reviewed the assigned games they receive a new set. We aim to have a handful of reviews per submission and will manually assign or personally re-review games that we felt didn’t get enough attention (more on this later).
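
To make the mechanics concrete, here’s a minimal Python sketch of how a platform- and genre-aware assignment step could work. The Judge and Game records, the assign_batch helper, and the specialist-first tiebreak are hypothetical illustrations of the idea, not our actual tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Judge:
    name: str
    platforms: set   # platforms the judge can run, e.g. {"windows", "ios"}
    genres: set      # genres the judge knows deeply
    queue: list = field(default_factory=list)

@dataclass
class Game:
    title: str
    platform: str
    genre: str

def assign_batch(games, judges, batch_size=10):
    """Give each game to one eligible judge, capping each judge's queue;
    games with no eligible judge are returned for manual assignment."""
    unassigned = []
    for game in games:
        candidates = [j for j in judges
                      if game.platform in j.platforms and len(j.queue) < batch_size]
        if not candidates:
            unassigned.append(game)
            continue
        # Prefer a genre specialist; break ties with the shortest queue.
        candidates.sort(key=lambda j: (game.genre not in j.genres, len(j.queue)))
        candidates[0].queue.append(game)
    return unassigned

judges = [Judge("specialist", {"windows"}, {"management sim"}),
          Judge("power user", {"windows", "ios"}, set())]
assign_batch([Game("KellySimGame", "windows", "management sim")], judges)
# -> lands with the management-sim specialist's queue
```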

The judges have access to all of the materials you submitted, as well as a discussion board viewable to other judges, and access to anonymously email the developer about technical issues.

[Screenshot: the judges’ review interface]

They’re asked to review the materials and vote on any (or all) of the following categories for each game:

Game – Does the game present a unique, interesting, or thoughtful design or mechanic? Is the aesthetic style compelling? Does the game address a social issue or offer a unique perspective or philosophy on game design? Does the game represent a larger genre in a compelling way, or an under-represented genre?

Company – Would the company be a good fit for the MEGABOOTH? Would they either benefit from the community support or act as a mentor for new participants? Do they represent something important outside of their game? Are they organized and easy to work with? Do they support the community and promote positive interactions?

Presentation – Would the game/company show well at a conference (ie the game has special hardware presentable in person only, the booth plans are interesting, the game has a visual style that would draw fans to the space, etc.)? Is the submission thoughtful and presented correctly for its audience? This is essentially an indicator of how successfully they market their game and take advantage of the opportunities available to them.

Judges are also asked to provide additional comments or feedback on their votes. This feedback is ONLY viewable to Admins who make the final decisions on game selections (ie Christopher and myself plus one or two technical Admins). This prevents confirmation bias in the feedback and allows judges to be more honest in their opinions to us. As I mentioned in the first post, we keep the feedback free-form so someone can say as much or as little as they like about whatever aspect of the game and submitted materials they feel is relevant. The vote categories are used more as an indicator of a submission’s overall appeal, and are used at the start of the final selections process to sort by a measurable data point.
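
Since the vote categories feed a measurable starting point for final selections, here’s a rough sketch of what that tally-and-sort step might look like. The review shapes and helper names (vote_score, initial_sort) are assumptions for illustration only:

```python
from collections import Counter

CATEGORIES = ("Game", "Company", "Presentation")

def vote_score(reviews):
    """Tally category votes across one submission's reviews. Free-form
    comments are read by Admins separately and are never scored."""
    totals = Counter()
    for votes in reviews:
        totals.update(v for v in votes if v in CATEGORIES)
    return totals

def initial_sort(submissions):
    """Order submissions by total vote count, the measurable data point
    used at the start of the final selections pass."""
    return sorted(submissions.items(),
                  key=lambda item: sum(vote_score(item[1]).values()),
                  reverse=True)

# The four example reviews in the next section would tally as
# Game: 2, Company: 2, Presentation: 1, for a sortable total of 5.
reviews = [{"Game"}, set(), {"Company"}, {"Game", "Company", "Presentation"}]
print(vote_score(reviews))
```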

An Example Review

To make this easier to visualize, we’ll follow the path of an example game which I’ll call KellySimGame. I’m going to walk through this as if we’re using the new review system and submitting for PAX.

Vote: Game

I really love management sims and this example is a well thought-out version of something along the lines of SimTower. They’ve made some interesting changes to the elevator management system that I think make it more intuitive overall. There are some flaws with the building menu and the camera controls are a bit wonky, but I think with some polish and play testing this could all get ironed out. Overall a recommend from me!

A mechanics-based review from someone who has experience with management sims. They provide specific feedback about systems in the game along with an opinion on what the game could be and where there’s room for improvement. They only voted for ‘Game’, but this vote holds more weight because of their deep experience with management sims.

Vote: None

The trailer looks boring and the art is terrible. The game runs fine but I don’t get why you would ever want to schedule elevators. I guess if you’re into this kind of thing maybe it’s more fun.

This one may seem useless or blunt, but it actually highlights an important point. There are tens of thousands of people who attend these events and hundreds of things pulling at their attention. People make snap judgements all the time on whether they’ll actually sit down to play a game or take time to listen to a pitch (press included!). If the game isn’t appealing or the marketing materials are poor, it will have a higher chance of getting overlooked regardless of anything we do to help support it.

Vote: Company

I know Kelly Inc! They’re part of the Canary Island scene and Kelly has helped with organizing local meetups. She’s hard-working and, I think, would be really helpful with the space. It would do them good to meet people outside of the Canary Islands.

The third review says nothing about the game and instead gives some insight into what the company is like to work with. This feedback shows that in addition to the game being fun, the company itself is motivated and community-driven. We research the companies themselves pretty thoroughly and take their local community into consideration. In this example the Canary Island scene is isolated and may not have connections to the greater global community. As a local community organizer, if they participate in the event they will have the opportunity to meet teams and partners from all over the world – which in the long run will positively impact their local community.

Vote: Company, Game, Presentation

Game runs fine and seems to be put together. I’m sure people will like it and the company does a lot of work for the Canary Island indies. Why not?

This review is actually the least useful of the group in terms of feedback, but it does help to confirm the other reviews regarding game quality and company reputation. On the flip side, this final piece of feedback could have been:

Vote: None

I can’t get the game to run consistently and it always crashes when I try to send an elevator to floor 4. Kelly Inc. is run by the same team of people from Bob Inc., which went bankrupt in 2005 after they spent all their money buying Nerf guns and Segways. I’m not sure what the deal is now, but it was quite a mess when they went under before.

In this final case, we perform additional research into the company and run another round of play-testing to get a clearer picture of the game’s quality and the company’s risks.

In cases where reviews wildly disagree with each other, it can indicate a few things (all of which are useful in their own right):

  • The company is popular but the game is not particularly good.
  • The game is experimental but could show really well.
  • The game is too early to show or not fully executed enough to make a final call.
  • The reviewers did not fully evaluate the submission and are providing feedback on promotional materials only.

In any of these cases, we take a second look at the entire submission, and/or take the opposing opinions into consideration when weighing how a game will generally be received.
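
One way to surface those wildly split submissions automatically would be a simple disagreement measure over the vote patterns. This is a hypothetical sketch; the disagreement helper and the flagging threshold are my illustration, not part of the actual process:

```python
def disagreement(reviews):
    """Fraction of judges whose vote set differs from the most common
    pattern: 0.0 is unanimous, values near 1.0 mean a wide split."""
    patterns = [frozenset(votes) for votes in reviews]
    if not patterns:
        return 0.0
    most_common = max(set(patterns), key=patterns.count)
    return sum(p != most_common for p in patterns) / len(patterns)

# Three 'Game' votes, one abstention, one all-category vote:
split = disagreement([{"Game"}, {"Game"}, {"Game"}, set(),
                      {"Game", "Company", "Presentation"}])
print(split)  # 0.4, high enough to warrant that second look
```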

Our goal with judge selection and feedback is to collect as wide a variety of specific opinions and targeted feedback as possible. Since the pool of judges is relatively small, we rely on quality feedback and work to contextualize it based on the judges’ expertise and review notes. We then couple this with hard data from the voting categories to form a comprehensive impression of the submission.

In the final post, I’ll talk about the big one: how we make selections!

About the Author

Kelly is the founder of Indie MEGABOOTH, a showcase that brings indie games into the heart of conferences previously dominated by AAA budgets and works to create support networks for small development teams. She’s involved in local community building along with creating cross-community networks, and acts as an advocate for indie developers with platform holders, distributors, publishers and press. The MEGABOOTH’s current focus is on expanding community support efforts and addressing discoverability issues for indie games.