

Artificial Intelligence (AI) tools have fundamentally changed the nature of work, drastically increasing our speed and productivity in research, writing, and analysis. However, this advancement introduces a critical risk: AI prompt leakage.
Most public AI tools run in the cloud and retain the text you type into them. When URSB staff use these platforms to draft communications, summarize documents, or troubleshoot, they may inadvertently expose confidential URSB information to third-party providers. The risk is amplified because many AI tools retain context from every interaction, sometimes surfacing surprisingly personal connections drawn from past inputs.
For URSB, which handles highly sensitive national data, including business records, patents, citizen identity data, and internal planning documents, accidental exposure has serious implications. Sharing this information with public AI platforms means it could be stored outside Uganda, used for model training, accessed by external engineers, or exposed in a data breach. Such exposure can damage public trust and jeopardize legal and intellectual property processes.


To use AI responsibly, URSB employees must adopt an essential rule: If the information is not already public, think twice before sharing it. Best practices include avoiding the input of names, ID numbers, legal documents, procurement papers, or uncensored screenshots. Instead, use generic, anonymized data, or replace sensitive details with placeholders like [Business Name]. While URSB explores secure, enterprise-grade AI solutions, prudent caution is vital. AI is powerful and transformative, but it is not private.
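The placeholder approach above can even be partly automated before text reaches an AI tool. The following is a minimal sketch in Python; the redaction patterns (a 14-digit ID, a company name ending in "Ltd", an email address) are illustrative assumptions, not actual URSB data formats, and any real deployment would need patterns reviewed by the Department of ICT & Innovation.

```python
import re

# Minimal sketch of pre-prompt redaction: replace sensitive details with
# placeholders before text is pasted into a public AI tool.
# NOTE: these patterns are illustrative assumptions, not URSB formats.
PATTERNS = {
    r"\b\d{14}\b": "[ID Number]",                    # e.g. a 14-digit ID
    r"\b[A-Z][a-z]+ (?:Ltd|Limited)\b": "[Business Name]",
    r"\b\S+@\S+\.\S+\b": "[Email]",
}

def redact(text: str) -> str:
    """Return a copy of `text` with sensitive matches replaced by placeholders."""
    for pattern, placeholder in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

print(redact("Remind Mukasa Ltd at info@mukasa.co.ug about ID 12345678901234."))
```

A script like this is a safety net, not a guarantee: it only catches what its patterns describe, so the first line of defence remains the human rule of thinking twice before sharing non-public information.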
Article by,
Emanuel Okello,
Senior Software Developer,
Department of ICT & Innovation
