
Codecool selects high-performers more efficiently with Benchmark.games

Codecool

Codecool provides international programming courses with real-life team projects in mentor-led online classes. They have five campuses in four countries and more than 1,000 graduates whose digital, programming, and tech careers they have helped kick-start across the globe. Prospective students can choose between a 12-month full-stack programming training and a 4-month front-end course, designed for people who want to learn programming and change their lives in today’s tech-driven work environment. Codecool’s students thus learn to program while developing the tech and soft skills that will allow them to succeed in the modern world.

The five campuses of Codecool

In October 2019, Codecool started using Benchmark.games’ game-based assessment during their selection process. The cooperation was preceded by a “phase 0 project”, whereby a custom benchmark profile was created for Codecool by assessing their recent and soon-to-be graduates. The goal was to create a competency profile for both higher- and lower-performing students as measured by Benchmark.games’ game-based assessment.

Being present in multiple countries, Codecool’s selection process needs to be adaptable to regional specificities. Differences in regional context require a great degree of flexibility from Codecool’s recruitment team; all the while, it is essential that critical processes and methods are standardized.

Identifying the challenge

The previous online selection tool was long, boring, frustrating, and not very trustworthy

Before Benchmark.games, measuring analytical skills during the selection process at Codecool was a grey area. Krisztina Csepely, Codecool’s Student Recruitment & CS Manager, explains:

“At first, we used two logic tests: a longer one during the online selection phase and a mini in-person pre-interview test. Based on the feedback we received, these tests were too long, boring, difficult, and not quite user-friendly.”

According to Codecool, potentially good candidates were likely lost due to the length of the assessment. The test was exhausting, taking up to 60 minutes to complete. Re-assessment was also allowed, and in extreme cases it took candidates up to 20 attempts to reach the minimum threshold. The test was therefore not only long and tedious; on occasion, it lost its purpose altogether.

Krisztina also highlighted that they were uncertain about the traditional logic tests’ predictive power. In practice, this meant that although a minimum threshold was set for applicants, the recruiters had no idea whether those below the threshold were in fact unsuitable to become high-performing developers, or whether those above it had true potential.

Accordingly, the test results did not carry much weight in the final decision. The Codecool team therefore began looking for a candidate engagement and assessment tool that was more colorful and attractive and, more importantly, one with accurate predictive power that measured a variety of skills.

“Benchmark.games is what we were looking for.”

Trial period

Both logic tests had to go

Having campuses in different countries meant that the selection processes were neither standardized nor easy to compare.

At first, Codecool did not change anything about the original process; instead, they added Benchmark.games as an extra step. They did so to check the selection tool’s validity and how it compared to the online and pre-interview logic tests.

They found that Benchmark.games was indeed a reliable tool for measuring cognitive abilities. They also found that many potential candidates had previously been lost in the online pre-selection phase due to high churn rates, which can be attributed to the test being time-consuming, hard, and boring. With Benchmark.games, however, completion rates increased substantially.

And so, Benchmark.games replaced the conventional logic test in the online pre-selection phase, but eventually, both logic tests had to go.

Budapest, Codecool’s main campus, was the first location to use Benchmark.games as the exclusive tool for assessing candidates’ competencies. At first, applicants were asked to complete the assessment before the in-person interview, so completing the Benchmark.games assessment became a precondition for an interview. Although regional differences in interview structure remain, other locations gradually introduced Benchmark.games as well, and the process became standardized.

Once Benchmark.games’ solution was introduced at all Codecool locations, it began to carry more weight in the final decision, and Codecool’s team set about redesigning the whole selection process from start to finish.

“We knew that Benchmark.games had to have a place in our new selection process.”

Final implementation

The final decision is now made based on the interview and Benchmark.games’ assessment.

  1. Registration and short questionnaire - 15 mins
    Applicants have to provide some personal details and fill in a short questionnaire.
  2. Online interview - 30 mins
    Applicants attend an interview with a member of the Codecool team.
  3. Benchmark.games’ game-based assessment - 15-20 mins
    Applicants complete the game-based assessment.

Krisztina explained that “The final decision is made based on the interview and Benchmark.games’ assessment.” Not only that, but Benchmark.games is now more heavily weighted due to the reliability of quantitative psychometric data.

The selection also became much quicker: the whole process can be completed within a matter of days.

Key takeaways

Saving time

By introducing Benchmark.games’ solution, Codecool has managed to save valuable time. Mentors no longer have to carry out an in-person pre-interview logic test, which was quite resource-heavy, demanding a couple of hours from each of them every week. As Krisztina put it, “time simply wasn’t well spent,” whereas now decision-making takes roughly 30-45 minutes less per candidate than it used to.

Efficiency

Previously, only about half of those who landed an interview were accepted. By introducing an efficient recruitment tool that provides reliable data on candidates’ skill sets, Codecool increased this ratio to around 75%.

Candidate experience

“Candidates’ feedback is vital for us,” explains Krisztina. Candidates reportedly enjoy the selection process and, more specifically, the game-based assessment phase.

Some candidates find the assessment hard or frustrating, but they emphasize that this is because they recognize the weight the game-based assessment carries.

Partnership

“Working with the Benchmark.games team went smoothly, even at the beginning, which was a lot more intense as we were collecting data from our students and alumni and setting up a new selection process. They always replied quickly, answered all of our questions, and had a solution-focused mindset.”
