No matter your role here at UCLA Health, AI will increasingly become a large part of both your work and home life. Sometimes its presence is clear, like when you ask Copilot Chat a question, receive action items from Zoom AI, or get AI-summarized search results from Google. Other times it’s less obvious, such as when it generates the content you read and watch.

AI offers great benefits but also real risks. It can reflect biases, make mistakes, and be exploited by bad actors. Learning how it works and how to use it responsibly is essential. To help you use AI safely, we’ve highlighted tips and resources below.

Tips for using AI tools

Because of the nature of our work, we must take extra precautions when using AI with UCLA Health data. To protect ourselves:

  • Assume everything you enter can be stored, shared, and/or leaked.
  • Avoid uploading sensitive information.
    • Learn more about different ways data is classified by watching the UCLA Health: Data Classification video.
  • Think before pasting large chunks of text to avoid inadvertently including identifiable information.
  • Check for biases and mistakes.
    • Zoom AI: review the generated Meeting Summary before sharing it with others.
  • Be mindful when crafting prompts to avoid leading questions.
  • Be wary of AI hallucinations.
    • AI can sometimes produce misleading results, often referred to as “hallucinations”: responses that contain fabricated data but appear authentic. Read more on AI hallucinations →

More information

For more information on prompting AI, review our AI: Guide to Prompting article →