InterviewLM Demo

Dec 4, 2025


InterviewLM Platform Overview

This document describes how hiring managers can use InterviewLM’s local setup to create AI-assisted coding assessments, invite candidates, review their performance, and analyze interview outcomes.


Step 1: Log In to the InterviewLM Platform

Log in to your locally hosted InterviewLM instance using your hiring manager (HM) account credentials.

Screenshot Screenshot


Step 2: Access the Hiring Manager View

After logging in, access the hiring manager view, which provides tools to manage assessments, candidates, and interview results.

Screenshot


Step 3: Create a New Assessment from Curated Questions

Use the hiring manager view to create a new assessment.
Select questions from the curated list of technical questions already prepared in the system, and configure the assessment for the role and skill level you want to evaluate.

Screenshot Screenshot
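For illustration only, an assessment configuration could be thought of as a small structured object like the sketch below. The field names (role, skillLevel, questionIds, and so on) are assumptions for the sake of the example, not a documented InterviewLM schema.

```typescript
// Hypothetical shape of an assessment configuration. All field names are
// illustrative; InterviewLM's actual schema may differ.
interface AssessmentConfig {
  title: string;
  role: string;                         // role being hired for
  skillLevel: "junior" | "mid" | "senior";
  questionIds: string[];                // IDs picked from the curated question list
  durationMinutes: number;
}

const backendAssessment: AssessmentConfig = {
  title: "Backend Engineer Screen",
  role: "Backend Engineer",
  skillLevel: "mid",
  questionIds: ["q-rest-api-design", "q-rate-limiter"],
  durationMinutes: 60,
};

console.log(`Created assessment: ${backendAssessment.title}`);
```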

Once the assessment is ready, invite candidates to attend the interview using your preferred communication or integration method.

Screenshot
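If your instance is reachable over HTTP, an invite integration could be as simple as the following sketch. The endpoint path, port, and payload shape are assumptions for illustration, not a documented InterviewLM API.

```typescript
// Hypothetical invite call against a locally hosted instance.
// The endpoint, port, and payload shape are illustrative assumptions.
async function inviteCandidate(assessmentId: string, email: string): Promise<void> {
  const response = await fetch("http://localhost:3000/api/invites", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ assessmentId, email }),
  });
  if (!response.ok) {
    throw new Error(`Invite failed: ${response.status}`);
  }
}

// Example: invite a candidate to the assessment created above.
inviteCandidate("backend-engineer-screen", "candidate@example.com")
  .catch((err) => console.error(err));
```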


Step 4: Review Completed Interviews

After candidates complete their interviews, open any session to review how the interview looked from the candidate’s perspective, including their environment and interactions during the session.

Screenshot


Step 5: Understand the Candidate Sandbox and Tracking

For each interview, the candidate is provided with an isolated sandbox environment.
In this sandbox, they can install any framework of their choice, or the framework mandated for the interview. As soon as they select a framework and start working, the platform begins tracking all of their interactions with the environment.

Screenshot
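As a mental model (not the platform's actual schema), each tracked interaction can be pictured as a timestamped event. The event kinds in the sketch below are assumptions based on the behaviors described in this step.

```typescript
// Hypothetical shape of a tracked sandbox interaction. Event kinds and
// fields are illustrative, based on the behaviors described above.
type SandboxEvent =
  | { kind: "framework_selected"; framework: string; at: Date }
  | { kind: "file_edited"; path: string; at: Date }
  | { kind: "code_run"; exitCode: number; at: Date }
  | { kind: "ai_prompt"; prompt: string; at: Date };

const session: SandboxEvent[] = [];

function track(event: SandboxEvent): void {
  session.push(event); // in a real system this would be persisted server-side
}

track({ kind: "framework_selected", framework: "express", at: new Date() });
track({ kind: "code_run", exitCode: 0, at: new Date() });
```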


Step 6: Use Dynamic Questions and Continuous Evaluation

Questions in the assessment are dynamic: as candidates solve problems, the platform can generate and present additional questions based on their performance.
Each time the candidate runs their code or progresses, the system evaluates what they have completed up to that point.

Screenshot
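Conceptually, the continuous evaluation described above behaves like a hook that fires on every run, re-scores progress so far, and can unlock a follow-up question. The sketch below is a hypothetical illustration of that loop; the names and threshold are assumptions, not InterviewLM's implementation.

```typescript
// Hypothetical continuous-evaluation loop: every time the candidate runs
// their code, progress so far is re-scored and, above a threshold, a
// follow-up question is unlocked. All names and thresholds are illustrative.
interface RunResult {
  testsPassed: number;
  testsTotal: number;
}

function evaluateProgress(run: RunResult): number {
  return run.testsTotal === 0 ? 0 : run.testsPassed / run.testsTotal;
}

function onCandidateRun(run: RunResult): void {
  const progress = evaluateProgress(run);
  console.log(`Progress so far: ${(progress * 100).toFixed(0)}%`);
  if (progress >= 0.8) {
    console.log("Unlocking a follow-up question based on performance.");
  }
}

onCandidateRun({ testsPassed: 4, testsTotal: 5 });
```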

Candidates may either chat with the integrated AI (e.g., Claude Code) to generate or refine code, or write the code themselves, whichever is more efficient for them.


Step 7: Leverage AI for Coding and System Design

The integrated AI assistant (currently Claude Code, with plans to support other leading models) can write and solve code within the sandbox.

Screenshot Screenshot

Candidates can also ask the AI to propose system or component designs. They can then evaluate whether the AI’s suggested design and code are appropriate and correct, and choose to adopt, modify, or replace them. The platform observes how effectively the candidate utilizes AI support as part of their assessment.


Step 8: View Scoring and Candidate Levels

Once the interview is complete, move to the scoring section for that session.

Screenshot Screenshot

Each session receives a detailed score summarizing what happened during the interview, including:

  • How much the candidate accomplished
  • Identified red flags and green flags
  • How the AI-derived scoring was computed (a hypothetical sketch follows after the screenshots below)

Screenshot Screenshot
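The platform does not publish its scoring formula, but a weighted composite over the signals listed above is one plausible mental model. Everything in the sketch below (the weights, signal names, and flag adjustments) is an assumption for illustration.

```typescript
// Hypothetical composite score over the signals listed above. The weights,
// signal names, and flag adjustments are illustrative assumptions, not
// InterviewLM's actual formula.
interface SessionSignals {
  accomplishment: number; // fraction of the assessment completed, 0..1
  aiUsage: number;        // how effectively AI support was used, 0..1
  greenFlags: number;     // count of positive observations
  redFlags: number;       // count of negative observations
}

function compositeScore(s: SessionSignals): number {
  const base = 0.6 * s.accomplishment + 0.4 * s.aiUsage;
  const flagAdjustment = 0.02 * s.greenFlags - 0.05 * s.redFlags;
  return Math.max(0, Math.min(1, base + flagAdjustment));
}

console.log(
  compositeScore({ accomplishment: 0.75, aiUsage: 0.9, greenFlags: 3, redFlags: 1 })
); // ≈ 0.82
```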

Based on these evaluations, the platform places the candidate into a specific level or “layer,” indicating their seniority and overall capability.

Screenshot Screenshot
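As an illustration only (the actual thresholds and layer names are not documented), the placement could work like a simple threshold mapping over a composite score such as the one sketched above:

```typescript
// Hypothetical score-to-layer mapping; thresholds and layer names are
// illustrative assumptions.
function toLayer(score: number): "junior" | "mid-level" | "senior" {
  if (score >= 0.85) return "senior";
  if (score >= 0.6) return "mid-level";
  return "junior";
}

console.log(toLayer(0.82)); // -> "mid-level"
```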

A replay feature (in development) will let you watch the entire interview session back to see exactly how the candidate interacted with the environment and the AI.

Screenshot


Step 9: Explore Analytics and Customize Scoring

Use the analytics section to review how your interviews and assessments are performing over time, including trends across candidates and roles.

Screenshot

Configure custom scoring parameters to align the evaluation with your organization’s specific criteria and weighting for skills, behaviors, and AI usage.

Screenshot Screenshot
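Conceptually, customizing the weighting amounts to overriding defaults like those in the Step 8 scoring sketch. The configuration shape below is a hypothetical illustration, not InterviewLM's actual settings format.

```typescript
// Hypothetical custom-weight configuration, overriding the illustrative
// defaults used in the Step 8 scoring sketch.
interface ScoringWeights {
  accomplishment: number;
  aiUsage: number;
  greenFlagBonus: number;
  redFlagPenalty: number;
}

// Example: an organization that weighs effective AI usage more heavily.
const customWeights: ScoringWeights = {
  accomplishment: 0.5,
  aiUsage: 0.5,
  greenFlagBonus: 0.01,
  redFlagPenalty: 0.1,
};

console.log(`AI usage weight: ${customWeights.aiUsage}`);
```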
