Why I'm Finally Building in Public

For years, I’ve built tools that stayed locked in production silos. Internal pipelines. Clinical systems. Things that worked, but that nobody outside my team would ever see.

That changes this year.

The Problem with Building in Private

When you work in clinical AI, there’s a natural tendency toward secrecy. Patient data is sensitive. Institutional knowledge feels proprietary. And honestly, it’s easier to ship fast when you’re not thinking about documentation.

But I’ve started to feel the cost of this approach. Every time I solve a problem, I solve it alone. Every time someone else hits the same wall, they start from scratch. The wheel gets reinvented constantly.

What I’m Sharing

This site is my commitment to building in public. Here’s what you’ll find:

Projects — Production systems I’ve built at Mount Sinai, including multi-agent architectures, GPU-accelerated pipelines, and clinical RAG systems. Where possible, I’ll share code, architectures, and lessons learned.

Publications — My papers on genomic curation, clinical decision support, and AI-generated text detection. All with links to preprints and code.

Blog — Deep dives into problems I’m solving. Not polished tutorials—more like field notes from someone figuring things out in real time.

A Glimpse at What I Build

Here’s an example of the kind of system I work on, a multi-agent architecture for genomic evidence extraction:

```mermaid
flowchart LR
    A[Literature] --> B[Extraction Agent]
    B --> C[Validation Agent]
    C --> D[Knowledge Graph]
    D --> E[Clinical Query]
    E --> F[Evidence Report]
```

This is a simplified view of OncoCITE, a system that automatically extracts genomic evidence from scientific papers. I’ll be writing more about the architecture decisions and lessons learned.
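To make the flow concrete, here is a minimal, hypothetical sketch of that pipeline shape in Python. The agent functions, the `Evidence` type, and the keyword-based extraction rule are illustrative placeholders of my own, not the actual OncoCITE implementation (which I'll cover in future posts):

```python
# Hypothetical sketch: each pipeline stage is a plain function, chained
# literature -> extraction -> validation -> knowledge graph -> query.
from dataclasses import dataclass


@dataclass
class Evidence:
    gene: str
    variant: str
    finding: str
    validated: bool = False


def extraction_agent(paper_text: str) -> list[Evidence]:
    """Pull candidate (gene, variant, finding) triples from a paper.
    A trivial keyword scan stands in here for an LLM-backed extractor."""
    evidence = []
    for line in paper_text.splitlines():
        if "BRAF" in line:  # placeholder rule for demonstration only
            evidence.append(Evidence("BRAF", "V600E", line.strip()))
    return evidence


def validation_agent(items: list[Evidence]) -> list[Evidence]:
    """Keep only items that pass a sanity check (placeholder rule)."""
    for item in items:
        item.validated = bool(item.gene and item.variant)
    return [i for i in items if i.validated]


def build_knowledge_graph(items: list[Evidence]) -> dict[str, list[Evidence]]:
    """Index validated evidence by gene so clinical queries can look it up."""
    graph: dict[str, list[Evidence]] = {}
    for item in items:
        graph.setdefault(item.gene, []).append(item)
    return graph


def clinical_query(graph: dict[str, list[Evidence]], gene: str) -> str:
    """Produce a simple evidence report for one gene."""
    hits = graph.get(gene, [])
    return f"{gene}: {len(hits)} validated finding(s)"


paper = "We observed BRAF V600E in 12 of 30 melanoma samples."
report = clinical_query(
    build_knowledge_graph(validation_agent(extraction_agent(paper))), "BRAF"
)
print(report)  # BRAF: 1 validated finding(s)
```

The design point the diagram captures is the separation of concerns: extraction and validation are independent agents, so either can be swapped or scaled without touching the downstream graph or query logic.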

What I’m Learning

I’m also using this year to go deeper on model interpretability. I’ve spent years making models work. Now I want to understand why they work—and more importantly, when they don’t.

I’m currently participating in SPAR and other AI safety programs. Expect posts on mechanistic interpretability, feature visualization, and what happens when you actually look inside the black box.

Let’s Connect

If you’re working on multi-agent systems, clinical AI, or interpretability—I’d love to hear from you. The best ideas come from unexpected conversations.

You can reach me at quidwaiali@gmail.com or connect on LinkedIn.

Let’s build something.



