Learner Community — Quality Log

What Participants Say

Feedback collected after cohort completion. Unedited except for minor formatting. We include the mixed reviews because that is more useful to you than a curated highlight reel.


140+ Learners Completed

4.7/5 Average Rating

12 Cohorts Run

88% Would Recommend

Participant Reviews

Recent Cohort Feedback


Khairul Hisham

Backend Developer · Kuala Lumpur

The MLOps course addressed gaps I didn't know I had. I'd been running a model in production for about a year but the monitoring was ad hoc at best. The section on drift detection gave me a proper framework. The peer feedback at the end was unexpectedly useful — seeing how other engineers approached the same design problem was genuinely instructive.

MLOps in Practice · March 2025


Nurul Farhana

Data Analyst · Petaling Jaya

I took the short Prompt course mainly because my team had started using an LLM tool and I wanted to use it less haphazardly. The third evening — applying the framework to actual work contexts — was the most useful part. I came in somewhat sceptical that three evenings could be meaningful and left with concrete things to try. One note: the second evening ran a bit long.

Prompt & Context Design · February 2025


Wong Zhi Hao

Software Engineer · Shah Alam

Software Fundamentals was exactly what I needed. I'm largely self-taught and was moving into a role with a bigger team. The git and code review exercises were structured in a way that made the abstract advice I'd read about actually click. Seven weeks felt about right — long enough to build habits rather than just awareness.

Software Fundamentals · January 2025


Siti Aminah

ML Engineer · Cyberjaya

I had mixed feelings partway through the MLOps course — some of the pipeline tooling examples felt slightly dated. I mentioned this to Amirul and he acknowledged it and said it was being updated for the next cohort. That transparency was reassuring. The monitoring section and the final presentation were both strong, and the shared environments saved a lot of setup pain.

MLOps in Practice · February 2025


Rashdan Iqbal

Product Manager · KL Sentral

I'm not an engineer but I work with engineering teams and we've been embedding LLM tools into several workflows. The Prompt course gave me a clearer sense of what my team is actually doing and — more importantly — helped me ask better questions when outputs don't look right. No prior ML knowledge was needed and that was accurate.

Prompt & Context Design · March 2025


Yeoh Li Shan

Junior Developer · Bangsar South

The Software Fundamentals cohort I was in had twelve people, and that size felt right. The exercises were more useful than I expected — I'd read about code review practices, but working through them with feedback from Suraya was different. The deployment section could have gone a bit deeper for my purposes, but I understand it was scoped to be accessible to the whole cohort.

Software Fundamentals · February 2025

Case Studies

Participant Journeys

Case Study 01

From Experimental Model to Monitored Production System

Challenge

A data engineer at a regional fintech company had trained a classification model that was working well in testing. Getting it into production and keeping it working reliably there was a different problem. Monitoring was a single alert that fired too late and too often to be useful. Rollouts required manual intervention every time.

What Changed

After completing MLOps in Practice, the engineer rebuilt the deployment process around a proper registry and implemented a staged rollout with a canary environment. The monitoring section gave her a framework for tracking the specific indicators that mattered for her model's purpose, rather than generic performance metrics.

Outcome

The team's deployment cycle went from an irregular manual process to a structured six-hour rollout. The monitoring setup caught a data drift issue within two weeks of going live — before it had any user-visible effect. The final course presentation was adapted into an internal documentation template for the team.

Case Study 02

Building the Git and Testing Habits That a Hiring Team Actually Expects

Challenge

A self-taught developer with two years of personal projects was preparing to move into a professional engineering team. His code worked but his git history was a single branch of save-point commits and he had no testing habits at all. He knew this would be a problem but wasn't sure how to address it systematically.

What Changed

The Software Fundamentals course gave him a structured approach to branching, pull requests, and review feedback. The testing exercises were designed around the kind of code he was likely to encounter — not abstract library examples. By week five he had submitted and revised a pull request in a shared repository with seven other course participants.

Outcome

Within two months of completing the course he joined a small product team. He mentioned in follow-up correspondence that the code review process from the course had prepared him for what the team expected — he was contributing to reviews meaningfully within his first month rather than waiting to feel ready.

Case Study 03

Understanding LLM Tools Well Enough to Use Them Professionally

Challenge

A communications manager at a professional services firm was using LLM tools for drafting and summarisation but felt uncomfortable with the variability of results. She had no framework for when to trust an output versus when to be sceptical, which meant she was either over-checking or under-checking depending on how the output looked.

What Changed

The Prompt and Context Design short course gave her a practical framework: what to include in a prompt, how context shapes outputs in ways that aren't obvious, and specific patterns to look for when an output is confidently wrong. The third evening's focus on applying these ideas to real work contexts was particularly applicable to her situation.

Outcome

She developed a simple internal review checklist for her team based on the evaluation framework from the course. Fewer AI-assisted drafts went out unchecked, yet total review time fell: because the checking process was clearer, each draft required less revision overall.

Contact

Speak to Us Before Enrolling

We are happy to discuss fit. Ask about prerequisites, scheduling, or anything from the reviews above.

+60 3-2145 8726

[email protected]

Mon–Fri, 9:00 AM – 6:00 PM

Join the next cohort

Cohort sizes are kept small. When a cohort fills, we open a waitlist for the next run.

Get in Touch