Slack-native license booking service snapshot, May 2026
Backend · QA Platform · AI Systems
Lim Zhi Jian
Software engineer for QA platforms, internal systems, and AI-assisted engineering.
I build backend services, automation infrastructure, quality data pipelines, and LLM-enabled engineering tools that make complex delivery work visible and repeatable.
Daily production booking workflow across the QA org
Primary-maintainer commit history in the mobile automation framework
Quality scorecard and engineering metrics pipeline
Flagship Work
Systems that turn QA experience into software engineering proof.
These projects lead with architecture and outcomes: backend workflows, automation infrastructure, data pipelines, release gates, and AI-assisted engineering practice.
Backend workflow system
Slack-Native License Booking Service
A production Slack workflow for booking scarce QA testing seats with atomic capacity checks, automated lifecycle jobs, auditability, and deterministic handling of contention.
Automation infrastructure
Appium Mobile Automation Framework
A Python, pytest, and Appium framework that lets QA engineers write one test flow and run it across Android and iOS with platform-aware page objects, parallel execution, and report publishing.
Engineering data platform
Quality Scorecard ETL
An engineering-quality data pipeline that normalizes source-system signals into scorecards for service health, adoption tracking, and leadership visibility.
Featured System
Race-free booking in a Slack-native service.
The license booking service is the cleanest proof of product-minded backend work: a real internal pain, a transactional core, Slack UX, lifecycle jobs, and public-safe production numbers.
Case Studies
Each project is framed as a system, not a bullet.
The structure is deliberately hiring-manager-friendly: problem, what I built, technical stack, measurable outcome, and one engineering tradeoff.
Atomic booking for contended QA licenses
- Problem
- A small paid-license pool was being managed through a legacy Apps Script and spreadsheet flow. The old check-then-create path could race under concurrent booking attempts, and seat ownership was hard to see.
- System Built
- I shipped a Slack-native service with modal-driven booking, transactional capacity allocation, idempotent Slack retries, lifecycle states, background expiry jobs, and public-safe audit records for each reservation.
- Stack
- FastAPI · Slack APIs · SQLAlchemy · Docker · GitOps · pytest
- Outcome
- Shipped in February 2026 and in daily production use across the full QA org, handling 788 bookings from 36 QA users on 8 active seats as of the May 2026 snapshot.
- Tradeoff
- I chose a transactional booking model over a spreadsheet-like reservation list because the hardest bug was not UI friction; it was capacity correctness under simultaneous requests (a simplified booking sketch follows below).
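To make that tradeoff concrete, here is a minimal sketch of the transactional capacity check, assuming a SQLAlchemy 2.0 session and illustrative LicensePool and Booking models rather than the production schema; the real service adds lifecycle states, expiry jobs, audit records, and the Slack-facing layer.

```python
# Minimal sketch of the transactional capacity check described above.
# Model names and fields are illustrative, not the production schema; the
# point is the row lock plus an idempotency key for retried Slack deliveries.
from sqlalchemy import ForeignKey, String, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column


class Base(DeclarativeBase):
    pass


class LicensePool(Base):
    __tablename__ = "license_pools"
    id: Mapped[int] = mapped_column(primary_key=True)
    capacity: Mapped[int]
    active_bookings: Mapped[int] = mapped_column(default=0)


class Booking(Base):
    __tablename__ = "bookings"
    id: Mapped[int] = mapped_column(primary_key=True)
    pool_id: Mapped[int] = mapped_column(ForeignKey("license_pools.id"))
    user_id: Mapped[str] = mapped_column(String(32))
    slack_request_id: Mapped[str] = mapped_column(String(64), unique=True)
    state: Mapped[str] = mapped_column(String(16), default="active")


class NoSeatsAvailable(Exception):
    pass


def book_seat(session: Session, pool_id: int, user_id: str, slack_request_id: str) -> Booking:
    """Reserve one seat inside a single transaction, or fail cleanly."""
    # Idempotency: a retried Slack delivery must not create a second booking.
    existing = session.scalars(
        select(Booking).where(Booking.slack_request_id == slack_request_id)
    ).one_or_none()
    if existing is not None:
        return existing

    # Row lock so concurrent requests serialize on the capacity check.
    pool = session.scalars(
        select(LicensePool).where(LicensePool.id == pool_id).with_for_update()
    ).one()
    if pool.active_bookings >= pool.capacity:
        raise NoSeatsAvailable(f"pool {pool_id} is full")

    booking = Booking(pool_id=pool.id, user_id=user_id, slack_request_id=slack_request_id)
    pool.active_bookings += 1
    session.add(booking)
    session.commit()
    return booking
```

Locking the pool row serializes concurrent bookings on one short critical section, and the unique Slack request id keeps retried deliveries from double-booking a seat.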
One test file, two mobile platforms
- Problem
- Mobile automation was expensive to maintain when platform differences leaked into every test. Test authors needed a cleaner model for dual-platform coverage, device setup, reporting, and failure triage.
- System Built
- I helped architect platform-dispatched page objects, shared driver fixtures, pre-session test data setup, BrowserStack execution, S3-published Allure reports, PR-scoped test selection, and agent-friendly repo conventions.
- Stack
- Python · pytest · Appium · BrowserStack · Allure · Karate · GitHub Actions
- Outcome
- The framework became a shared automation base, with roughly 290 commits authored by me, Android parallelism set to 5, and fixes that raised BrowserStack app discovery from 50 to 100 builds.
- Tradeoff
- I favored explicit platform dispatch over duplicated Android and iOS test trees, trading a little abstraction work for long-term authoring consistency and lower maintenance cost (see the dispatch sketch below).
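A minimal sketch of the dispatch idea, with hypothetical page objects, locators, and a pytest fixture; the real framework layers in shared Appium session fixtures, BrowserStack capabilities, parallel execution, and Allure publishing.

```python
# Sketch of platform-dispatched page objects: the test flow is written once and
# the fixture picks the right implementation. Class names, locators, and the
# shared `driver` fixture (the framework's Appium session) are illustrative.
import os

import pytest


class LoginPage:
    """Shared flow; platform subclasses supply only locators."""

    USER_FIELD: tuple
    PASSWORD_FIELD: tuple
    SUBMIT: tuple

    def __init__(self, driver):
        self.driver = driver

    def login(self, user: str, password: str) -> None:
        self.driver.find_element(*self.USER_FIELD).send_keys(user)
        self.driver.find_element(*self.PASSWORD_FIELD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


class AndroidLoginPage(LoginPage):
    USER_FIELD = ("id", "com.example.app:id/username")
    PASSWORD_FIELD = ("id", "com.example.app:id/password")
    SUBMIT = ("id", "com.example.app:id/submit")


class IosLoginPage(LoginPage):
    USER_FIELD = ("accessibility id", "username")
    PASSWORD_FIELD = ("accessibility id", "password")
    SUBMIT = ("accessibility id", "submit")


@pytest.fixture(scope="session")
def platform() -> str:
    # The real framework exposes this as a pytest option / CI matrix value.
    return os.environ.get("TEST_PLATFORM", "android")


@pytest.fixture
def login_page(driver, platform):
    # `driver` is assumed to come from the framework's shared Appium fixture.
    pages = {"android": AndroidLoginPage, "ios": IosLoginPage}
    return pages[platform](driver)


def test_login_flow(login_page):
    # One test body; dispatch happened in the fixtures above.
    login_page.login("qa-user", "secret")
```

New screens or platforms only add locator classes; the test flows themselves stay untouched.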
Quality signals leaders can actually read
- Problem
- Quality signals were scattered across repositories, SonarCloud, GitHub, CI artifacts, and manual reporting. Leaders needed a consistent view without asking every team to hand-roll status updates.
- System Built
- I contributed ETL scripts, GitHub and SonarCloud enrichment, Snowflake models, dbt-ready structures, BI scorecard inputs, and CI rules for performance-test adoption across services.
- Stack
- Python · Snowflake · dbt · GitHub APIs · SonarCloud · BI dashboards
- Outcome
- The scorecard pipeline gave leadership a consistent, measurable view of quality engineering work across the organization, with roughly 152 authored commits and cross-org integration work delivered.
- Tradeoff
- I treated the pipeline as a product interface, not just ETL: schema clarity and scorecard semantics mattered as much as extraction because the audience was both technical and leadership-facing (a small extraction sketch follows below).
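A small sketch of one extraction step, pulling a repository's GitHub and SonarCloud signals into a flat, warehouse-friendly row; the endpoints are the public APIs, while the row schema and column names are illustrative rather than the production models.

```python
# Sketch of one scorecard extraction step: fetch a repo's quality signals and
# normalize them into a flat row ready for warehouse loading. Token env var
# names and the output column names are illustrative.
import os

import requests

GITHUB_API = "https://api.github.com"
SONAR_API = "https://sonarcloud.io/api"


def scorecard_row(owner: str, repo: str, sonar_key: str) -> dict:
    # Repository-level signals from the GitHub REST API.
    gh_headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
    gh = requests.get(f"{GITHUB_API}/repos/{owner}/{repo}", headers=gh_headers, timeout=30)
    gh.raise_for_status()
    repo_data = gh.json()

    # Quality measures from SonarCloud (token as basic-auth username).
    sonar = requests.get(
        f"{SONAR_API}/measures/component",
        params={"component": sonar_key, "metricKeys": "coverage,bugs,code_smells"},
        auth=(os.environ["SONAR_TOKEN"], ""),
        timeout=30,
    )
    sonar.raise_for_status()
    measures = {m["metric"]: m.get("value") for m in sonar.json()["component"]["measures"]}

    # Flat, warehouse-friendly row; one record per repository per run.
    return {
        "repo": f"{owner}/{repo}",
        "default_branch": repo_data["default_branch"],
        "open_issues": repo_data["open_issues_count"],
        "last_pushed_at": repo_data["pushed_at"],
        "coverage_pct": float(measures["coverage"]) if "coverage" in measures else None,
        "sonar_bugs": int(measures["bugs"]) if "bugs" in measures else None,
        "code_smells": int(measures["code_smells"]) if "code_smells" in measures else None,
    }
```

Rows like this land in Snowflake, where the dbt models and BI scorecards take over.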
Release gates for service boundaries
- Problem
- Service-to-service breakages were discovered too late because integration testing did not expose a machine-readable contract that deployment could trust.
- System Built
- I stood up a self-hosted Pact Broker on Kubernetes, wrote consumer/provider verification flows, wired can-i-deploy checks into CI, and documented the rollout pattern.
- Stack
- Pact JVM · Pact Broker · Helm · Kubernetes · PostgreSQL · GitHub Actions
- Outcome
- Delivered the first consumer-provider contract-testing pair, backed by 9 broker commits, 6 provider-service PRs, and 1 consumer-service PR.
- Tradeoff
- I used WIP and pending pact semantics so PR-stage contracts could be verified early without making adoption feel brittle for service teams (the deploy-gate sketch below shows the check).
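A sketch of the deploy gate's shape, written here as a small Python wrapper around the pact-broker CLI with illustrative environment variable names; the production gate runs the same can-i-deploy check from GitHub Actions against the self-hosted broker.

```python
# Sketch of the CI release gate: ask the Pact Broker whether this service
# version is safe to deploy. Env var names (SERVICE_NAME, GIT_SHA, TARGET_ENV)
# are illustrative; the pact-broker CLI is assumed to be on PATH.
import os
import subprocess
import sys


def can_i_deploy(pacticipant: str, version: str, environment: str) -> bool:
    result = subprocess.run(
        [
            "pact-broker", "can-i-deploy",
            "--pacticipant", pacticipant,
            "--version", version,
            "--to-environment", environment,
            "--broker-base-url", os.environ["PACT_BROKER_BASE_URL"],
        ],
        check=False,
    )
    # The CLI exits 0 only when every verified contract allows the deploy.
    return result.returncode == 0


if __name__ == "__main__":
    ok = can_i_deploy(
        pacticipant=os.environ["SERVICE_NAME"],
        version=os.environ["GIT_SHA"],
        environment=os.getenv("TARGET_ENV", "production"),
    )
    sys.exit(0 if ok else 1)
```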
Owning test evidence outside a vendor UI
- Problem
- Release evidence lived behind vendor UI paths and authenticated attachment endpoints, making audits, retrospectives, and long-term ownership painful.
- System Built
- I built a Node CLI that combines REST exports with Playwright session reuse for UI-only attachments, then paired it with a Next.js explorer over local or S3-backed archives.
- Stack
- Node.js · Playwright · AWS S3 · Next.js · React · shadcn/ui · GitHub Packages
- Outcome
- The toolchain made test cycles, executions, step results, and attachments portable, reviewable, and runnable through an internal npx package.
- Tradeoff
- I combined official APIs with browser-session attachment retrieval because the valuable evidence was split across supported and UI-only surfaces (see the session-reuse sketch below).
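The real exporter is a Node CLI, but the session-reuse idea translates to a short Python Playwright sketch with placeholder URLs and selectors: authenticate once, persist the storage state, then pull UI-only attachment URLs through the authenticated context.

```python
# Sketch of the session-reuse pattern behind the evidence exporter: log in once
# with Playwright, persist the authenticated state, and fetch UI-only
# attachment URLs with that session. URLs, selectors, and the state file
# location are placeholders.
from pathlib import Path

from playwright.sync_api import sync_playwright

STATE_FILE = Path("auth_state.json")  # hypothetical location


def fetch_attachment(url: str, out_path: Path) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        if STATE_FILE.exists():
            # Reuse the saved session instead of logging in on every export.
            context = browser.new_context(storage_state=str(STATE_FILE))
        else:
            context = browser.new_context()
            page = context.new_page()
            page.goto("https://vendor.example.com/login")  # placeholder login flow
            page.fill("#username", "qa-bot")
            page.fill("#password", "secret")
            page.click("button[type=submit]")
            context.storage_state(path=str(STATE_FILE))

        # The attachment endpoint is UI-only, but the context's request client
        # carries the same cookies the browser session holds.
        response = context.request.get(url)
        if not response.ok:
            raise RuntimeError(f"attachment fetch failed: {response.status}")
        out_path.write_bytes(response.body())
        browser.close()
```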
LLM output with engineering guardrails
- Problem
- Generic generated test cases were not enough. The output needed to match the org's SDLC, test pyramid, critical-user-journey model, and source traceability expectations.
- System Built
- I authored the organization-plugin layer, comparison docs, and integration notes that turn shared AI generation into output shaped by local engineering practice.
- Stack
- Python · OpenAI APIs · Cursor rules · Jira · Confluence · Figma MCP
- Outcome
- The work connects GenAI exploration to practical software-engineering quality: repeatable prompts, organization rules, traceable inputs, and reviewable generated cases.
- Tradeoff
- I kept the AI layer opinionated instead of generic because quality work depends on context: SDLC shape, risk categories, and release expectations all change the right test plan (the prompt sketch below shows the shape).
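A minimal sketch of the organization-plugin idea using the OpenAI Python SDK; the rule text, model name, and ticket fields are illustrative stand-ins for the actual plugin layer and prompt set.

```python
# Sketch of org-rule-shaped generation: the rules travel with every call, and
# the output must cite its source, so reviewers can check generated cases
# against the same expectations the prompt carried. Rules and model name are
# illustrative, not the real plugin content.
from openai import OpenAI

ORG_RULES = """\
- Follow the org test pyramid: prefer API-level cases; UI cases only for critical user journeys.
- Every test case must cite the requirement ID it covers.
- Flag any step that cannot be traced to the source ticket instead of inventing it.
"""

client = OpenAI()  # assumes OPENAI_API_KEY in the environment


def generate_cases(ticket_id: str, requirement_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": f"You generate test cases.\nOrganization rules:\n{ORG_RULES}",
            },
            {
                "role": "user",
                "content": f"Source: {ticket_id}\nRequirement:\n{requirement_text}",
            },
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content
```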
Technical Depth
The range is QA-shaped, but the work is systems engineering.
Backend and Internal Systems
Automation Infrastructure
Quality Data and Release Gates
AI Engineering
Build the system, then prove it
The strongest work pairs architecture with evidence: screenshots, data, tests, runbooks, and production usage.
Make quality visible
Automation is more valuable when it becomes a signal that engineers, managers, and release owners can act on.
Use AI with guardrails
LLM tools should inherit engineering context: source traceability, validation paths, repo conventions, and honest failure modes.