
This document describes how hiring managers can use InterviewLM’s local setup to create AI-assisted coding assessments, invite candidates, review their performance, and analyze interview outcomes.
Log in to your locally hosted InterviewLM instance using your hiring manager (HR/HM) account credentials.

After logging in, access the hiring manager view, which provides tools to manage assessments, candidates, and interview results.

Use the hiring manager view to create a new assessment.
Select questions from the curated list of technical questions available in the system, and configure the assessment for the role and skill level you want to evaluate.
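
As a rough illustration, an assessment definition of this kind might look like the sketch below; the TypeScript shape and field names (role, skillLevel, questionIds, durationMinutes) are assumptions for illustration, not InterviewLM's actual schema.

```typescript
// Hypothetical shape of an assessment definition; field names are
// illustrative, not the actual InterviewLM schema.
interface AssessmentConfig {
  title: string;
  role: string;                 // role being hired for, e.g. "Backend Engineer"
  skillLevel: "junior" | "mid" | "senior";
  questionIds: string[];        // picked from the curated question bank
  durationMinutes: number;
}

const backendAssessment: AssessmentConfig = {
  title: "Backend Engineer Screen",
  role: "Backend Engineer",
  skillLevel: "mid",
  questionIds: ["q-rest-api-design", "q-sql-optimization"],
  durationMinutes: 90,
};
```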

Once the assessment is ready, invite candidates to attend the interview using your preferred communication or integration method.

After candidates complete their interviews, open any session to review how the interview looked from the candidate’s perspective, including their environment and interactions during the session.

For each interview, the candidate is provided with an isolated sandbox environment.
In this sandbox, they can install any framework of their choice, or the one mandated for the interview. As soon as they select a framework and start working, the platform begins tracking all of their interactions with the environment.
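
To make the tracking concrete, the minimal sketch below shows what recorded interaction events could look like; the event kinds and fields are assumptions for illustration, not the platform's actual telemetry format.

```typescript
// Hypothetical interaction events the sandbox might record once the
// candidate picks a framework and starts working. Names are illustrative.
type SandboxEvent =
  | { kind: "framework_selected"; framework: string; at: string }
  | { kind: "command_run"; command: string; exitCode: number; at: string }
  | { kind: "file_edited"; path: string; at: string }
  | { kind: "ai_prompt"; prompt: string; at: string };

const sessionLog: SandboxEvent[] = [
  { kind: "framework_selected", framework: "express", at: "2024-01-01T10:00:00Z" },
  { kind: "command_run", command: "npm install express", exitCode: 0, at: "2024-01-01T10:01:30Z" },
  { kind: "file_edited", path: "src/server.ts", at: "2024-01-01T10:05:12Z" },
];
```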

Questions in the assessment are dynamic: as candidates solve problems, the platform can generate and present additional questions based on their performance.
Each time the candidate runs their code or otherwise makes progress, the system evaluates what they have completed up to that point.
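
The sketch below illustrates, under assumed names, how such an incremental check might be shaped: each run produces a snapshot of completed work, and the platform decides whether to surface a follow-up question. None of these names come from InterviewLM itself.

```typescript
// Hypothetical incremental evaluation: assess progress after each run
// and optionally queue a follow-up question. All names are illustrative.
interface ProgressSnapshot {
  questionId: string;
  testsPassed: number;
  testsTotal: number;
  runCount: number;
}

function nextStep(snapshot: ProgressSnapshot): "continue" | "follow_up_question" {
  const complete = snapshot.testsPassed === snapshot.testsTotal;
  // If the candidate finished the current problem, surface a follow-up;
  // otherwise let them keep working on the current one.
  return complete ? "follow_up_question" : "continue";
}
```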

Candidates may either chat with the integrated AI (e.g., Claude Code) to generate or refine code, or write the code themselves, whichever is more efficient for them.
The integrated AI assistant (currently Claude Code, with plans to support other leading models) can write code and solve problems directly within the sandbox.

Candidates can also ask the AI to propose system or component designs. They can then evaluate whether the AI’s suggested design and code are appropriate and correct, and choose to adopt, modify, or replace them. As part of the assessment, the platform observes how effectively the candidate uses AI support.
Once the interview is complete, move to the scoring section for that session.

Each session receives a detailed score summarizing what happened during the interview.

Based on these evaluations, the platform places the candidate into a specific level or “layer,” indicating their seniority and overall capability.
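
As a hypothetical example of how a per-session result could be represented, the sketch below pairs the detailed score with the assigned layer; the layer labels and fields are placeholders for illustration, not the platform's actual output.

```typescript
// Hypothetical per-session result combining the detailed score with the
// assigned seniority layer. Labels and fields are illustrative only.
interface SessionResult {
  sessionId: string;
  overallScore: number;            // e.g. 0-100
  layer: "junior" | "mid" | "senior" | "staff";
  notes: string[];                 // reviewer-visible highlights from the session
}

const example: SessionResult = {
  sessionId: "sess-1234",
  overallScore: 82,
  layer: "senior",
  notes: ["Solved both core problems", "Used AI effectively for boilerplate"],
};
```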

A replay feature (in development) will let you play back the entire interview session and see exactly how the candidate interacted with the environment and the AI.

Use the analytics section to review how your interviews and assessments are performing over time, including trends across candidates and roles.

Configure custom scoring parameters to align the evaluation with your organization’s specific criteria and weighting for skills, behaviors, and AI usage.
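
For example, a weighting configuration along these lines could express the relative importance of skills, behaviors, and AI usage; the keys, value ranges, and combining function below are assumptions for illustration rather than InterviewLM's actual configuration format.

```typescript
// Hypothetical custom scoring weights; the category names mirror the ones
// mentioned above (skills, behaviors, AI usage), but the schema is illustrative.
interface ScoringWeights {
  skills: number;     // correctness, code quality, design
  behaviors: number;  // communication, process, debugging approach
  aiUsage: number;    // how effectively the candidate leverages the AI assistant
}

const weights: ScoringWeights = { skills: 0.5, behaviors: 0.3, aiUsage: 0.2 };

// Combine per-category scores (each 0-100) into one weighted total.
function weightedScore(scores: ScoringWeights, w: ScoringWeights = weights): number {
  return scores.skills * w.skills + scores.behaviors * w.behaviors + scores.aiUsage * w.aiUsage;
}
```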
