ZJ Systems Portfolio

Backend · QA Platform · AI Systems

Lim Zhi Jian

Software engineer for QA platforms, internal systems, and AI-assisted engineering.

I build backend services, automation infrastructure, quality data pipelines, and LLM-enabled engineering tools that make complex delivery work visible and repeatable.

Kuala Lumpur, Malaysia · Remote-friendly / hybrid
Warm editorial systems artifact showing automation, quality data, CI, and LLM tooling panels.
788 production bookings

Slack-native license booking service snapshot, May 2026

36 QA users served

daily production booking workflow across the QA org

~290 Appium commits

primary maintainer signal in the mobile automation framework

~152 ETL commits

quality scorecard and engineering metrics pipeline

Featured System

Race-free booking in a Slack-native service.

The license booking service is the cleanest proof of product-minded backend work: a real internal pain, a transactional core, Slack UX, lifecycle jobs, and public-safe production numbers.

Slack-native license booking artifact with booking modal, lifecycle states, capacity timeline, and metrics.

Case Studies

Each project is framed as a system, not a bullet.

The structure is deliberately hiring-manager friendly: problem, what I built, technical stack, measurable outcome, and one engineering tradeoff.

Redacted portfolio artifact showing a Slack booking modal, booking lifecycle, capacity timeline, and usage metrics.
01 Backend workflow system

Atomic booking for contended QA licenses

Problem
A small paid-license pool was being managed through a legacy Apps Script and spreadsheet flow. The old check-then-create path could race under concurrent booking attempts, and seat ownership was hard to see.
System Built
I shipped a Slack-native service with modal-driven booking, transactional capacity allocation, idempotent Slack retries, lifecycle states, background expiry jobs, and public-safe audit records for each reservation.
Stack
FastAPI · Slack APIs · SQLAlchemy · Docker · GitOps · pytest
Outcome
Shipped in February 2026 and in daily production use for the full QA org, handling 788 bookings across 36 QA users and 8 active seats at the May 2026 snapshot.
Tradeoff
I chose a transactional booking model over a spreadsheet-like reservation list, because the hardest bug was not UI friction; it was capacity correctness under simultaneous requests.
Styled Appium mobile automation artifact with contribution timeline, CI evidence, and Android/iOS execution context.
02 Automation infrastructure

One test file, two mobile platforms

Problem
Mobile automation was expensive to maintain when platform differences leaked into every test. Test authors needed a cleaner model for dual-platform coverage, device setup, reporting, and failure triage.
System Built
I helped architect platform-dispatched page objects, shared driver fixtures, pre-session test data setup, BrowserStack execution, S3-published Allure reports, PR-scoped test selection, and agent-friendly repo conventions.
Stack
Python · pytest · Appium · BrowserStack · Allure · Karate · GitHub Actions
Outcome
The framework became a shared automation base, with roughly 290 authored commits from me, Android parallelism set to 5, and fixes that raised BrowserStack app discovery from 50 to 100 builds.
Tradeoff
I favored explicit platform dispatch over duplicated Android and iOS test trees, trading a little abstraction work for long-term authoring consistency and lower maintenance cost.
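The platform-dispatch idea can be sketched with a parametrized pytest fixture: one test body, run once per platform, with the page-object subclass chosen in the fixture. Class names, locator strings, and the fixture shape are illustrative, not the framework's actual API.

```python
# Minimal sketch of "one test file, two platforms" via platform-
# dispatched page objects. All names here are hypothetical.
import pytest


class LoginPage:
    """Shared interface that test files code against."""

    def username_locator(self) -> str:
        raise NotImplementedError


class AndroidLoginPage(LoginPage):
    def username_locator(self) -> str:
        return 'new UiSelector().resourceId("username")'


class IosLoginPage(LoginPage):
    def username_locator(self) -> str:
        return '**/XCUIElementTypeTextField[`name == "username"`]'


_IMPLEMENTATIONS = {"android": AndroidLoginPage, "ios": IosLoginPage}


@pytest.fixture(params=["android", "ios"])
def login_page(request) -> LoginPage:
    # The real fixture would also build the Appium driver; dispatch
    # happens here, so tests never branch on platform themselves.
    return _IMPLEMENTATIONS[request.param]()


def test_username_field_visible(login_page: LoginPage):
    # One test body, executed once per platform by the fixture params.
    assert "username" in login_page.username_locator()
```

The tradeoff shows up exactly where the case study says: the dispatch table is extra abstraction, but test authors write each scenario once instead of maintaining parallel Android and iOS trees.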
Quality Scorecard ETL artifact showing source systems, pipeline stages, Snowflake models, and dashboard panels.
03 Engineering data platform

Quality signals leaders can actually read

Problem
Quality signals were scattered across repositories, SonarCloud, GitHub, CI artifacts, and manual reporting. Leaders needed a consistent view without asking every team to hand-roll status updates.
System Built
I contributed ETL scripts, GitHub and SonarCloud enrichment, Snowflake models, dbt-ready structures, BI scorecard inputs, and CI rules for performance-test adoption across services.
Stack
Python · Snowflake · dbt · GitHub APIs · SonarCloud · BI dashboards
Outcome
The scorecard pipeline connected quality engineering work to measurable organizational visibility, with roughly 152 authored commits and cross-org integration work delivered.
Tradeoff
I treated the pipeline as a product interface, not just ETL: schema clarity and scorecard semantics mattered as much as extraction because the audience was both technical and leadership-facing.
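The "pipeline as a product interface" point can be sketched as one enrichment step that joins per-repo GitHub and SonarCloud extracts into a row with explicit, documented semantics. Field names are invented for illustration and are not the real Snowflake or dbt schema.

```python
# Hedged sketch of scorecard enrichment: raw extraction payloads in,
# a row with named, unit-annotated fields out. Schema is hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class ScorecardRow:
    repo: str
    coverage_pct: float          # SonarCloud line coverage, 0-100
    open_critical_issues: int    # SonarCloud issues at severity CRITICAL
    merged_prs_30d: int          # GitHub PRs merged in the last 30 days


def enrich(github: dict, sonar: dict) -> ScorecardRow:
    """Join one repo's GitHub and SonarCloud extracts into one row."""
    return ScorecardRow(
        repo=github["repo"],
        coverage_pct=float(sonar.get("coverage", 0.0)),
        open_critical_issues=int(sonar.get("critical_issues", 0)),
        merged_prs_30d=int(github.get("merged_prs_30d", 0)),
    )


row = enrich(
    {"repo": "payments-service", "merged_prs_30d": 14},
    {"coverage": 81.5, "critical_issues": 2},
)
```

Because the scorecard's audience includes leadership, the defaults and units live in the row definition itself rather than in downstream dashboard logic.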
Pact contract testing pilot artifact showing consumer service, Pact Broker, provider verification, and CI gate.
04 Backend reliability

Release gates for service boundaries

Problem
Service-to-service breakages were discovered too late because integration testing did not expose a machine-readable contract that deployment could trust.
System Built
I stood up a self-hosted Pact Broker on Kubernetes, wrote consumer/provider verification flows, wired can-i-deploy checks into CI, and documented the rollout pattern.
Stack
Pact JVM · Pact Broker · Helm · Kubernetes · PostgreSQL · GitHub Actions
Outcome
Delivered the first consumer-provider contract testing pair, backed by 9 broker commits, 6 provider-service PRs, and 1 consumer-service PR.
Tradeoff
I used WIP and pending pact semantics so PR-stage contracts could be verified early without making adoption feel brittle for service teams.
Zephyr Scale backup visualizer artifact showing a catalog, execution table, detail panel, metrics, and media preview.
05 Internal developer tool

Owning test evidence outside a vendor UI

Problem
Release evidence lived behind vendor UI paths and authenticated attachment endpoints, making audits, retrospectives, and long-term ownership painful.
System Built
I built a Node CLI that combines REST exports with Playwright session reuse for UI-only attachments, then paired it with a Next.js explorer over local or S3-backed archives.
Stack
Node.js · Playwright · AWS S3 · Next.js · React · shadcn/ui · GitHub Packages
Outcome
The toolchain made test cycles, executions, step results, and attachments portable, reviewable, and runnable through an internal npx package.
Tradeoff
I combined official APIs with browser-session attachment retrieval because the valuable evidence was split across supported and UI-only surfaces.
Generated systems artifact showing LLM tools, automation, quality data, CI, and evidence panels connected in a portfolio architecture map.
06 AI engineering

LLM output with engineering guardrails

Problem
Generic generated test cases were not enough. The output needed to match the org's SDLC, test pyramid, critical-user-journey model, and source traceability expectations.
System Built
I authored the organization-plugin layer, comparison docs, and integration notes that turn shared AI generation into output shaped by local engineering practice.
Stack
Python · OpenAI APIs · Cursor rules · Jira · Confluence · Figma MCP
Outcome
The work connects GenAI exploration to practical software-engineering quality: repeatable prompts, organization rules, traceable inputs, and reviewable generated cases.
Tradeoff
I kept the AI layer opinionated instead of generic, because quality work depends on context: SDLC shape, risk categories, and release expectations all change the right test plan.
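One such guardrail, source traceability, can be sketched as a simple acceptance check on generated cases. The rule, the field names, and the Jira-key pattern are all assumptions for illustration, not the plugin layer's real logic.

```python
# Illustrative guardrail: generated test cases are only accepted when
# they carry a title, steps, and a traceable requirement key. Fields
# and the key format are hypothetical.
import re

JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g. QA-1234


def accept_generated_case(case: dict) -> bool:
    """Reject LLM output that lacks a title, steps, or a traceable source."""
    has_title = bool(case.get("title", "").strip())
    has_steps = len(case.get("steps", [])) > 0
    has_trace = bool(JIRA_KEY.search(case.get("source", "")))
    return has_title and has_steps and has_trace


assert accept_generated_case(
    {"title": "Login succeeds", "steps": ["open app"], "source": "QA-1234"}
)
assert not accept_generated_case(
    {"title": "Login succeeds", "steps": ["open app"], "source": "unknown"}
)
```

The point is that "reviewable generated cases" implies a machine-checkable bar, not just a style guide: output that cannot be traced back to a requirement never enters review.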

Technical Depth

The range is QA-shaped, but the work is systems engineering.

Backend and Internal Systems

FastAPI · Node.js · SQLAlchemy · Slack APIs · REST APIs · Docker

Automation Infrastructure

Python · pytest · Appium · Karate · Allure · BrowserStack

Quality Data and Release Gates

Snowflake · dbt · Pact · Helm · GitHub Actions · S3

AI Engineering

OpenAI APIs · LLM workflows · Cursor rules · MCP · eval thinking · agent docs

Build the system, then prove it

The strongest work pairs architecture with evidence: screenshots, data, tests, runbooks, and production usage.

Make quality visible

Automation is more valuable when it becomes a signal that engineers, managers, and release owners can act on.

Use AI with guardrails

LLM tools should inherit engineering context: source traceability, validation paths, repo conventions, and honest failure modes.