Getting Started with AI · Lesson 8

AI Safety, Limitations & Best Practices

Understand what AI can't do, how to avoid common pitfalls, and how to use AI responsibly.


What AI Gets Wrong

Every AI model has significant limitations:

Hallucination — AI confidently generates false information. It might cite fake studies, invent statistics, or create plausible-sounding but incorrect explanations. Always verify important facts independently.

Knowledge Cutoff — Models don't know about events after their training data cutoff. They might provide outdated information about recent topics.

Math Errors — Despite improvements, LLMs still make arithmetic and logical errors. Never trust AI math without verification.

Reasoning Gaps — AI can fail at multi-step reasoning, spatial reasoning, and tasks requiring true understanding of cause and effect.
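The math limitation is easy to work around: re-run any arithmetic the model gives you instead of trusting it. A minimal sketch (the figures are made up for illustration):

```python
# Suppose an AI assistant claims: "A $1,200 laptop discounted 15%,
# then taxed at 8%, costs $1,101.60." Re-run the arithmetic yourself.
price = 1200.00
discounted = price * (1 - 0.15)   # apply the 15% discount
with_tax = discounted * 1.08      # apply the 8% tax

print(f"Discounted: ${discounted:.2f}")
print(f"Final: ${with_tax:.2f}")

# Only trust the AI's figure if it matches your own computation.
assert abs(with_tax - 1101.60) < 0.01
```

The same habit applies to percentages, unit conversions, and date math, which are all common places for LLM slips.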

The Verification Habit

The most important habit in AI usage: verify before trusting.

- Facts and statistics: Cross-check with Perplexity or primary sources
- Code: Test it. Never deploy AI-generated code without testing
- Medical/legal/financial advice: Always consult a professional
- Citations: AI frequently invents fake references. Verify every citation
- Recent events: Use Perplexity or web search for anything time-sensitive

A good rule: Use AI to draft, brainstorm, and accelerate — but always apply human judgment to the final output.
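"Test it" can be as simple as a handful of assertions. Here is a hypothetical example: suppose an AI drafted this leap-year function; a few quick checks, especially the century edge cases models often fumble, catch problems before the code is used anywhere.

```python
def is_leap_year(year: int) -> bool:
    """Hypothetical AI-drafted function -- verify before trusting it."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Quick checks, including the century edge cases
assert is_leap_year(2024)        # ordinary leap year
assert not is_leap_year(2023)    # ordinary non-leap year
assert not is_leap_year(1900)    # century not divisible by 400
assert is_leap_year(2000)        # century divisible by 400
print("all checks passed")
```

If any assertion fails, that's your signal to fix the code (or ask the AI to) before going further, not after deployment.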

Privacy and Data Considerations

Be thoughtful about what you share with AI:

- Sensitive data — Don't share passwords, API keys, personal financial details, or private health information
- Company confidential info — Check your organization's AI policy before sharing proprietary data
- Training data — Some providers use your conversations to train future models (you can usually opt out in settings)
- Data retention — Conversations may be stored and reviewed for safety. Don't share anything you wouldn't want seen

For sensitive work, use providers with strong privacy policies (Claude's data policy is generally considered the most privacy-friendly).
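One practical habit is scrubbing obvious secrets from a prompt before pasting it into an AI tool. A minimal sketch — the patterns below are illustrative, not exhaustive, and real workflows should use a dedicated secret scanner and follow company policy:

```python
import re

# Illustrative (not exhaustive) patterns for obvious secrets.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|pk|api)_[A-Za-z0-9_]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Debug this: auth fails for jane@example.com with key sk_live_abcdef1234567890"
print(redact(prompt))
```

Redaction like this catches careless pastes, but it is no substitute for judgment: if data is genuinely confidential, keep it out of the prompt entirely.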

Practice This

Ask ChatGPT a factual question about a topic you know well. Identify any errors or hallucinations in the response. Then ask the same question to Perplexity and compare the cited answer.

Try this on ChatGPT, Claude, or Gemini

Key Takeaways
  • AI hallucination is real — always verify important outputs
  • Never trust AI-generated code without testing it
  • Don't share sensitive personal or company data with AI tools
  • Use AI to draft and accelerate, but apply human judgment to final outputs

Test Yourself

Q1: What is the most important habit when using AI?
Verify before trusting. Always cross-check important facts, test code, and apply human judgment to AI outputs.