The use of generative artificial intelligence (AI) in clinical trial recruitment is rapidly changing how participants are identified, engaged, and enrolled. While the technology promises greater efficiency and broader reach, it also raises urgent ethical concerns. First, algorithmic bias in recruitment systems may reproduce or deepen existing health disparities, threatening efforts to build representative and inclusive clinical studies. Second, reliance on AI-generated patient materials complicates informed consent and transparency, since tailored outputs can obscure critical risks, oversimplify complex medical information, or frame participation in ways that undermine trust. Third, integrating AI into recruitment workflows demands stronger frameworks for accountability and responsible governance: unlike traditional recruitment processes, generative AI introduces layers of opacity that raise difficult questions about who bears responsibility for errors, bias, and participant harm. Without robust regulatory oversight, the rapid adoption of AI-driven tools risks eroding participant protections and public trust in clinical research.