Hello there! Over the course of the next few posts I’m going to discuss how we run our submissions and select games to showcase with us in the MEGABOOTH. As we’ve been growing, this has probably been the question I get asked most frequently. It’s been a bit of a black box until now, which was partly intentional and partly due to changing systems and discovering what the process actually was. I figured that now we’ve settled in, and with a new set of submissions on the horizon, it would be a good time to put together a written version of how this all works.
As with my previous posts, this will be in depth – so if you enjoy reading, then this post is for you! If not, then the TLDR is: It’s complicated.
To kick things off, I have a story.
I had a friend boast once that he prepared the ‘best’ hookah. I thought – well so far they’ve all been about the same, but this sounds fun. Bring it on! After much discussion and preparation it was finally ready. And it was incredible. Not even like smoking, closer to sucking down flavored velvet. Amazing! So of course, I asked – ‘What’s your secret?’
This prompted a lengthy response on various topics including cleaning hoses, choosing flavors, trial and error, and finesse. It turned out the big secret was lots of time, practice, dedication, and a pile of tiny complicated steps that had been honed to perfection. Some secret, huh?
Not surprisingly, this is true of most amazing things I’ve come across. There isn’t a trick or some undiscovered step. It’s normally a long, complicated process that has been agonized over and refined by someone dedicated to its perfection. This overview is meant to be exactly that: an explanation of all the nuanced thoughts and processes that go into the Indie MEGABOOTH submission reviews. It’s not a magic formula or a quick, satisfying answer. In short – it’s complicated.
I’m going to break this up into a couple of different sections:
Alright! On to the History lesson. Class is in session.
When the MEGABOOTH first started at PAX East 2012, we had to essentially recruit people to participate. It was untested and new, so there was risk in even the concept. After the inaugural event, the teams who participated were interested in running it again and started telling their friends about it. For the next 2-3 shows it was fully referral based: if a company we had worked with previously recommended a game or a team, it would in most cases be part of the next showcase. The submission process was informal and privately distributed. Four years ago the climate was different, and the indie studios attending large consumer events were ahead of the game, so to speak. Personal recommendations from these teams held a lot more weight, and 99% of the time they pointed us to really cool games with great teams behind them.
This reached a tipping point as the network grew from personal recommendations, to recommendations from a friend of a friend, and finally to cold calls from companies that had heard about us through the grapevine or seen us at the event itself.
The process was first formalized using a Google form (mostly to help me keep track of things in a spreadsheet rather than manually sorting through emails). I asked for basic information about the game so I could play it or take a look at the promotional materials. This was intended to establish a rudimentary quality bar, ensuring the games would be playable for attendees and that we were offering space to people who were actually committed to participating. I didn’t want someone claiming space only for it to turn out that the game wouldn’t be ready to show, or that they weren’t serious and would drop out at the last minute.
This is also around the time when I started getting a small group of industry friends together to help me sort through the games (many of whom still help out to this day!). I’d have everyone take a look and provide their feedback in whatever context they wanted. There were no specific questions or voting systems. This free-form feedback is still the backbone of reviews today. Turns out that the different types of feedback, and the variety therein, are just as interesting as how the game itself plays!
We typically run two submission sets per year (one for Spring events and one for Fall events) and receive about 250-300 submissions each time. I tallied up that Chris and I play, on average, about 500-600 unique games a year. As you can imagine, keeping track of builds, screenshots, and written information in a Google Doc or email thread is not feasible at that volume. Something had to be done.
After the initial Google Form process, we transitioned into using a form embedded directly on our website. This system allowed teams to upload items and link to relevant materials in a way that we could actually keep track of. This was also part of a major website overhaul (thanks to the super awesome Ryan Burrell), where we created a more robust infrastructure and moved into WordPress for usability. We implemented a new portal system for teams to provide front-facing information, which had been turning into another time sink – collecting press materials and media from 100 companies was… sub-optimal to say the least.
This information was further exported into Presskit, with additional details on the team, the game, and reference media, allowing a casual user to find relevant information about each game such as launch dates, where to purchase, game trailers, etc. These materials were prepared in the lead-up to an event and fully available when we announced lineups for showcases.
Even though the overhaul and new submission form helped us collect the information submitted by developers, it didn’t help with collecting reviews from judges. This hit a low point when Chris and I were working off a spreadsheet titled ‘Final Decision 4’, which was a combination of about a dozen reviewers’ feedback, a handful of sheets with our individual notes, and 3 previous ranking sheets. In short – a mess.
Around this time, we got talking with a developer who had created a full backend system to handle large volumes of game submissions. A choir sang, the clouds parted, a ray of light shone upon our weary faces. And it was good.
We’ve been using the new system for our last 3 submission sets, and it has dramatically changed the amount of time spent sifting through feedback and compiling final lineups. The actual submission review time is about the same, but the feedback is more comprehensive overall, while the decision-making process has been cut down to about a quarter of the previous workload. This system also solved our communication issues with teams by adding an email tool that handles mass notices and keeps track of individual correspondence related to build tech issues. That fixed the (rare!) instances where we would miss notifying someone of a decision or overlook responses due to the volume of information coming in or going out at once.
Overall this has allowed us to focus more effort and time on playing and reviewing games for selection rather than managing the systems and information. In the next post, I’ll talk about how that judging process plays out. Stay tuned!