Recent advances in large language models (LLMs) pose unprecedented challenges to academic integrity in programming courses. This paper presents a cloud-based programming assessment system that creates ephemeral coding environments to preserve the authenticity of student work and deter AI-assisted plagiarism. Using Terraform and AWS, the system provisions an individualized virtual machine for each student during in-person assessments, mirroring the course environment while granting no access to pre-existing code or external resources. Integrated with GitHub Classroom, the system handles assignment distribution, code submission, and resource clean-up. We discuss the system's design, present a cost analysis, and report preliminary observations from its deployment in a CS 2 course taught in C at Northeastern University (Vancouver). Preliminary results indicate that this controlled environment promotes student engagement and discourages reliance on AI for routine tasks. Future work will examine how this approach affects learning outcomes and AI usage patterns.
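
As a rough illustration of the provisioning step described above, the sketch below shows how per-student virtual machines might be declared in Terraform using `for_each` over a student roster. The region, AMI, instance size, and resource names here are placeholder assumptions for illustration, not the course's actual configuration.

```hcl
# Minimal sketch (hypothetical values): one ephemeral EC2 instance per student,
# created only for the duration of the in-person assessment and destroyed afterward.
provider "aws" {
  region = "us-west-2" # assumed region
}

variable "students" {
  description = "GitHub usernames of enrolled students"
  type        = set(string)
}

resource "aws_instance" "assessment_vm" {
  for_each      = var.students
  ami           = "ami-0abcdef1234567890" # assumed image mirroring the course environment
  instance_type = "t3.micro"              # assumed size; small VMs keep per-exam cost low

  tags = {
    Name    = "assessment-${each.key}"
    Purpose = "in-person-exam"
  }
}
```

After the assessment, running `terraform destroy` against the same configuration would tear down every instance, which is one way the resource clean-up mentioned above could be realized.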