Why Signal Exists

What Signal changes about hiring

The resume made sense when applying for a job was expensive. You printed it, mailed it, or hand-delivered it. Candidates applied to a handful of roles they actually wanted. Employers got a manageable pile and read them carefully. The friction was real, so the signal was real.

That world is gone. AI writes resumes in seconds. Applicant tracking systems screen them automatically. Candidates apply to hundreds of roles with one click. Nobody is evaluating a human anymore. AI generates the applications, AI filters them, and the whole thing is just noise.

Wealthsimple figured out the escape: require proof of work. You can generate a resume with a prompt. You cannot generate a working system with one. Asking candidates to build something real immediately separates the people who want this job from the people who are applying to everything.

The problem is it doesn't scale. Signal fixes that.

What the human can now do

A hiring manager running Wealthsimple's process today has 48 hours to review every submission. At 8-10 minutes per submission (video, written explanation, demo URL), you get through roughly 100 before attention starts to degrade. With Signal, each card takes 45 seconds to scan, so the same person covers ten times as many submissions without their judgment dropping. The human isn't doing less work; they're doing better work. Instead of processing submissions, they're making judgment calls. That's the job that actually needed a human. Signal just makes sure they have the energy to do it well.
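The arithmetic above can be checked with a quick back-of-envelope sketch. The numbers (9 minutes as the midpoint of 8-10, 45 seconds per card, 100 careful manual reviews as the attention budget) come from the paragraph; everything else is an illustrative assumption, not Signal's actual model.

```python
# Back-of-envelope throughput comparison using the figures above.
# Assumes the reviewer has the same total attention budget either way;
# only the time spent per item changes.

manual_minutes_per_submission = 9        # midpoint of 8-10 minutes
card_seconds = 45                        # one Signal card scan

attention_budget_minutes = manual_minutes_per_submission * 100  # ~100 careful reads

manual_capacity = attention_budget_minutes // manual_minutes_per_submission
signal_capacity = attention_budget_minutes * 60 // card_seconds

print(manual_capacity)   # 100 submissions reviewed manually
print(signal_capacity)   # 1200 cards scanned -- roughly a 10x+ jump
```

The exact multiplier depends on where attention actually degrades, but the order of magnitude holds across any reasonable per-item estimate.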

What AI is responsible for

Signal watches the demo videos, transcribes the audio, and determines whether the candidate built something real or mocked something up. It scores written explanations against a rubric. It produces a summary of every submission. It drafts a personalized rejection for every candidate who doesn't advance. The processing happens before the human ever opens the dashboard.
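The stages above form a fixed pipeline: transcribe, score against the rubric, summarize, draft the rejection. A minimal sketch of that ordering follows; every name here (Submission, the stage functions, the rubric keys) is hypothetical, and the real transcription and scoring would call external models rather than the stand-ins shown.

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    candidate: str
    video_url: str
    explanation: str
    transcript: str = ""
    scores: dict = field(default_factory=dict)
    summary: str = ""
    rejection_draft: str = ""

# Hypothetical rubric: explicit criteria with weights summing to 1.0.
RUBRIC = {"built_something_real": 0.6, "clear_explanation": 0.4}

def transcribe(sub: Submission) -> None:
    sub.transcript = f"[transcript of {sub.video_url}]"   # stand-in for ASR

def score(sub: Submission) -> None:
    # Stand-in for model-based rubric scoring.
    sub.scores = {criterion: 0.0 for criterion in RUBRIC}

def summarize(sub: Submission) -> None:
    sub.summary = f"{sub.candidate}: scored on {len(RUBRIC)} criteria"

def draft_rejection(sub: Submission) -> None:
    sub.rejection_draft = f"Dear {sub.candidate}, thank you for applying..."

def process(sub: Submission) -> Submission:
    # All of this runs before the human ever opens the dashboard.
    transcribe(sub)
    score(sub)
    summarize(sub)
    draft_rejection(sub)
    return sub
```

The point of the fixed ordering is that the human only ever sees fully processed cards; no submission reaches the dashboard half-scored.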

Where AI must stop

The offer decision. Hiring has legal weight. It requires reading someone's potential, not just their output. A strong demo doesn't guarantee the right person, and a weak demo doesn't disqualify one. That judgment belongs to a human. AI surfaces the signal; the human decides what it means.

What breaks first at scale

The evaluation pipeline. Right now, Signal works well because the rubric criteria are explicit and the submissions are concrete. At scale, with hundreds of different roles and thousands of submissions, small errors in how the AI scores compound. A rubric that slightly underweights the wrong thing produces a consistently skewed shortlist. Nobody notices because the dashboard looks confident. The risk isn't one bad decision, it's a pattern of bad decisions that's hard to trace back to the rubric. Fixing it requires humans staying close to the output, spot-checking evaluations, and treating the AI scores as a starting point rather than a verdict.
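The spot-checking remedy above can be made concrete: sample a slice of AI-scored submissions for human re-review and look for a *consistent* gap, not just noise. This is a sketch, not Signal's implementation; the function names, the 10% sample rate, and the drift threshold are all illustrative assumptions.

```python
import random

def spot_check(ai_scores, human_rescore, sample_rate=0.10,
               drift_threshold=0.15, seed=0):
    """Sample scored submissions and compare AI scores to a human re-score.

    ai_scores: {submission_id: score in [0, 1]}
    human_rescore: callable(submission_id) -> score in [0, 1]
    """
    rng = random.Random(seed)
    ids = sorted(ai_scores)
    sample = rng.sample(ids, max(1, int(len(ids) * sample_rate)))
    gaps = [ai_scores[i] - human_rescore(i) for i in sample]
    mean_gap = sum(gaps) / len(gaps)
    # A mean gap near zero means random disagreement; a large gap in one
    # direction suggests a skewed rubric -- the failure mode that hides
    # behind a confident-looking dashboard.
    return {
        "sampled": len(sample),
        "mean_gap": mean_gap,
        "flag_rubric": abs(mean_gap) > drift_threshold,
    }
```

A loop like this is what "treating AI scores as a starting point" looks like in practice: the human rescoring cost stays small, but systematic rubric drift gets caught before it shapes an entire shortlist.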

The goal isn't to make hiring easier to automate. It's to make it harder to fake.

Built by Zack Dorward