At a glance #
- Client: Moose Jaw Ophthalmology (Dr. Matt Regan), Moose Jaw, Saskatchewan
- Scope: Full Zeiss FORUM environment migrated from on-prem to AWS
- Archive migrated: 5.36 TB, 2.5 million DICOM files, studies from 1996 to present
- Production since: February 27, 2026
- Cutover: Performed during a single after-hours window, with zero data loss
- Region: AWS ca-central-1 (Montreal) — all patient data stays in Canada
The situation before #
Moose Jaw Ophthalmology was running FORUM on a physical server in the clinic. The archive had grown to roughly five terabytes across three decades of patient studies, and the server’s warranty was running out. The practice wanted to:
- Get off the hardware-replacement cycle
- Let Dr. Regan review studies from outside the clinic
- Protect the archive from site-specific risks (drive failure, fire, flood)
- Keep patient imaging data inside Canada
Moving to a full cloud-hosted FORUM made sense — but the migration needed to preserve the workflow their Zeiss and non-Zeiss devices already used, and it could not interrupt clinic operations.
The cost question #
Dr. Regan is technically hands-on, and he’d already run the AWS pricing calculator himself before we spoke. His conclusion at the time was that moving FORUM to AWS would be more expensive than keeping the on-prem server. He wasn’t wrong — on the naive numbers.
What changed the math was matching the architecture to how the clinic actually uses the server. The AWS pricing calculator defaults to a server running 24/7 on standard-tier storage. For a clinic that is open five days a week during daytime hours, and whose imaging archive grows by a few hundred gigabytes a year, most of it rarely accessed historical data, that default is exactly wrong. Scheduling the server to clinic hours cuts compute cost roughly in half. Tiering the archive into cheaper storage classes cuts storage cost by a large multiple. Combined with eliminating the on-prem hardware refresh cycle, the monthly cost lands in a range that works for a small-to-medium practice.
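To make the cost-shaping concrete, here is a back-of-envelope sketch. Every rate, size, and fraction below is an illustrative assumption, not actual AWS pricing and not this clinic's bill; the point is the shape of the comparison, not the dollar figures.

```python
# Back-of-envelope comparison: calculator-default architecture vs. the
# shaped one. All numbers are illustrative assumptions, not real AWS
# pricing or this clinic's actual costs.

HOURLY_COMPUTE = 0.35          # assumed $/hour for a Windows EC2 instance
HOURS_ALWAYS_ON = 24 * 30      # ~720 hours in a month (calculator default)
HOURS_SCHEDULED = 11 * 22      # Mon-Fri 7am-6pm: 11 h/day, ~22 weekdays/month

STANDARD_GB_MONTH = 0.025      # assumed $/GB-month, hot (standard) tier
COLD_GB_MONTH = 0.005          # assumed $/GB-month, cold (archive) tier
ARCHIVE_GB = 5360              # ~5.36 TB archive
HOT_FRACTION = 0.05            # assume only recent studies stay on hot storage

def monthly_compute(hours):
    return hours * HOURLY_COMPUTE

def monthly_storage(tiered):
    if not tiered:
        return ARCHIVE_GB * STANDARD_GB_MONTH
    hot = ARCHIVE_GB * HOT_FRACTION
    cold = ARCHIVE_GB - hot
    return hot * STANDARD_GB_MONTH + cold * COLD_GB_MONTH

naive = monthly_compute(HOURS_ALWAYS_ON) + monthly_storage(tiered=False)
shaped = monthly_compute(HOURS_SCHEDULED) + monthly_storage(tiered=True)
print(f"calculator-default: ${naive:,.0f}/month")
print(f"shaped:             ${shaped:,.0f}/month")
```

Under these assumed numbers the shaped architecture comes out at a small fraction of the calculator default, which is the same qualitative result the clinic saw: the savings come from the schedule and the storage tiering, not from haggling over instance prices.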
This cost-shaping is most of our value on the ongoing side. The migration is a one-time project; making the bill make sense month after month is the work.
What we built #
A single-tenant AWS environment, owned entirely by the practice:
- FORUM server on Amazon EC2 (Windows Server 2022), sized to the clinic's actual workload, then right-sized again after a monitoring period.
- Amazon S3 archive, holding the full DICOM archive with automatic tiering — newer studies on hot storage, older studies moving to cold storage without manual intervention.
- AWS Storage Gateway presenting the S3 archive as a network drive that FORUM sees as F:\ — FORUM works the same as it did on-prem.
- Site-to-site IPsec VPN via the clinic’s existing Meraki firewall — Zeiss devices connect to the cloud FORUM the same way they connected to the on-prem one.
- WireGuard VPN for remote access, so the doctors can review studies from home or while traveling. Split-tunnel routing means only clinic-related traffic goes through the VPN.
- AWS Instance Scheduler shutting the server down outside clinic hours (Mon–Fri 7am–6pm Saskatchewan time), cutting compute cost by roughly half with no impact on clinic workflow.
- CloudTrail audit logging with seven-year retention, capturing every read and write on the imaging archive — aligned with HIPAA-grade audit discipline, applied to Canadian privacy obligations.
- CloudWatch monitoring covering server health, network, and storage, with a central dashboard.
- Start/stop control panel accessible to the doctors via a secure URL, for cases where they need the server on outside scheduled hours.
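The scheduling decision at the heart of the compute savings is simple enough to show. The sketch below is a minimal, hypothetical version of the window check; in the actual deployment this lives in AWS Instance Scheduler, not hand-rolled code. One real detail it does capture: Saskatchewan does not observe daylight saving time, so `America/Regina` stays on CST year-round and the schedule never drifts.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Saskatchewan has no DST, so America/Regina is CST all year.
CLINIC_TZ = ZoneInfo("America/Regina")
OPEN_HOUR, CLOSE_HOUR = 7, 18   # Mon-Fri 7am-6pm, per the schedule above

def server_should_run(now_utc: datetime) -> bool:
    """Return True if the FORUM server should be up at this instant."""
    local = now_utc.astimezone(CLINIC_TZ)
    is_weekday = local.weekday() < 5              # Mon=0 .. Fri=4
    in_hours = OPEN_HOUR <= local.hour < CLOSE_HOUR
    return is_weekday and in_hours
```

The doctors' start/stop control panel is effectively a manual override of this decision: it starts the instance outside the window when an evening or weekend session calls for it.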
Collaboration with Zeiss #
FORUM installation, configuration, and the cutover itself were handled in close collaboration with Zeiss’s support engineers. CloudKeep built and manages the AWS environment around FORUM; Zeiss runs FORUM itself. The result is a clean division of responsibility that keeps both sides in their area of expertise.
The cutover #
The migration used FORUM’s own forwarding feature to make the cutover low-risk:
- For a staging period, the AWS FORUM server was configured to forward new studies back to the on-prem server. This meant the cloud server could be built, tested, and loaded with historical archive data while the clinic continued to operate normally.
- On cutover evening, the last delta of that day’s studies was synced.
- Zeiss devices and workstations were repointed to the AWS FORUM server’s IP address.
- End-to-end tests confirmed normal operation.
The cutover was performed during the planned after-hours window and completed without a rollback being needed — though rollback would have been a matter of changing IP addresses back on the devices, thanks to FORUM’s forwarding.
The historical archive — approximately 5 TB across 2.5 million files going back to 1996 — was migrated in staged nightly uploads over several weeks, running outside clinic hours to avoid competing for bandwidth.
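A rough sizing exercise shows why the staged upload takes "several weeks" rather than days. The link speed, nightly window, and efficiency factor below are assumptions for illustration, not measurements from the clinic's connection; the real transfer was paced by the sync tooling.

```python
import math

# Rough sizing for a staged nightly archive upload. All parameters are
# assumed values for illustration, not the clinic's actual link.
ARCHIVE_GB = 5360            # ~5.36 TB across ~2.5 million DICOM files
UPLINK_MBPS = 50             # assumed usable upload bandwidth, megabits/s
NIGHT_WINDOW_HOURS = 10      # e.g. 8pm-6am, outside clinic hours
EFFICIENCY = 0.7             # protocol overhead, millions of small files, retries

# megabits/s -> GB/s -> GB per nightly window, derated for overhead
gb_per_night = UPLINK_MBPS / 8 / 1000 * 3600 * NIGHT_WINDOW_HOURS * EFFICIENCY
nights = math.ceil(ARCHIVE_GB / gb_per_night)
print(f"~{gb_per_night:.0f} GB per night -> ~{nights} nights")
```

With these assumptions the archive moves in roughly 35 nightly sessions, about five weeks, without ever competing with clinic traffic for bandwidth.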
Outcomes #
- No more on-prem server to maintain. The old clinic server is being decommissioned.
- No more clinic-stopping hardware events. AWS absorbs the hardware-failure risk that previously sat in the back room; software issues are diagnosed and remediated remotely, without waiting for on-site IT.
- Remote study review for both doctors, over a secure VPN.
- Full archive retained — nothing was dropped, including decades-old studies.
- Canadian data residency — all infrastructure in Montreal, and the audit trail demonstrates it.
- Cost discipline built in — server scheduled to clinic hours; storage tiered automatically; right-sizing after a monitoring period cut compute cost a further ~50% without user-visible impact.
- Zero clinical disruption during the cutover.
In Dr. Regan’s words #
My 7-year-old FORUM server was causing me and our clinic increasingly frequent headaches — hard drives running out of space, power source failures, corrupt boot files, spontaneous restarts — all of which created varying degrees of clinic disruption. I started to look at replacing the server, but knew I would be in this same scenario in a few years given the typical lifecycle of computer hardware.
I investigated cloud-based options, but was quickly turned off with results of the pricing calculator. Before giving up completely, I was put in touch with Shinichi who had experience migrating other clinics to AWS. After the first call, I was reassured by his knowledge base in this area as well as his reassurance that if things didn’t go well, we can always revert back to the on-site server. There was essentially zero risk to test it.
Shinichi’s plan was clearly laid out to make sure everything was ready to go prior to cutover. The cutover evening went very smoothly and we were fully functional the next day. Beyond how smoothly the process was, I am extremely pleased with the uptime reliability of our FORUM AWS server and my FORUM server headaches are now gone. The flexibility has been an incredible bonus; Shinichi has been optimizing the cost-to-compute ratio to deliver the speed we need at the lowest monthly cost. Further, we can also upgrade the virtual hardware if a future FORUM Application/Workplace ever requires additional compute which makes this future-proof.
— Dr. Matt Regan, Moose Jaw Ophthalmology
What this means for your practice #
Every practice is different — the number of Zeiss and non-Zeiss devices, archive size, clinic hours, and remote-access needs all change the specifics. But the underlying pattern works: Zeiss runs FORUM, CloudKeep runs everything around it, and your practice ends up with a cloud-hosted imaging environment that your doctors, staff, and Zeiss devices all see as “same as before, just more reliable.”
Considering the same move for your practice? Start with a free 30-minute assessment.
Book a free assessment