AI is an exciting field, but beware: the technology is far from perfect!

The Guardian reported last week that Google had been forced to suspend Gemini's ability to generate images of people after it produced inappropriate depictions of historical figures' ethnicities.

Generative artificial intelligence (AI) tools have indeed made significant strides, but they’ve also encountered their fair share of blunders. Here are some notable instances where generative AI has stumbled:

  1. Misinformation and Fake News:
    • Algorithms powering AI-generated content have inadvertently contributed to the spread of misinformation, disinformation, and fake news. Tools like OpenAI’s ChatGPT and Google’s Gemini have made it easier than ever to produce content, but distinguishing factual information from fabricated content has become increasingly challenging.
    • A German study observed a trend toward “simplified, repetitive, and potentially AI-generated content” on search engines like Google, Bing, and DuckDuckGo.
    • NewsGuard identified 725 unreliable websites that publish AI-generated news with minimal human oversight.

  2. Government Intervention:
    • Australia has grappled with how news and other content are displayed and moderated on online platforms. Amendments to the criminal code and the introduction of a news media bargaining code represent partial successes, but they also highlight the scale of the problem.

  3. Generative AI Blunders in 2023:
    • The past year saw several high-profile generative AI blunders. While some may not have directly caused harm, they underscore the challenges these systems still face.
    • These blunders serve as cautionary tales for developers and researchers, emphasizing the need for rigorous testing and responsible deployment.

  4. Flaws in Current Generative AI Models:
    • Generative AI models are not infallible. Some common flaws include:
      • Focusing Too Much on Technology: Companies sometimes prioritize the technology itself over the value it delivers to users.
      • Mystifying Large Language Models (LLMs): LLMs can generate plausible-sounding but nonsensical text.
      • Rushing to Market and Announcing Vaporware: Hasty deployment without thorough testing can lead to embarrassing mistakes.
      • Underinvesting in Design: Neglecting user experience and design can result in suboptimal outcomes.
      • Mistaking Novelty for Necessity: Not all novel AI applications are necessary or useful.

In summary, while generative AI holds immense promise, it is crucial to tread carefully, address its limitations, and ensure responsible development and deployment. Other useful articles on this topic include:

  1. theconversation.com
  2. worklife.news
  3. pcmag.com
  4. maginative.com

Greg Black
Chief Executive Officer
With help from Jeremy Norton and Microsoft Copilot
1st March 2024