In 2023, artificial intelligence emerged as one of the biggest stories in technology, driven by the rise of generative AI and applications like ChatGPT. Since OpenAI launched ChatGPT to the public in late 2022, awareness of the technology and its potential has spread widely, from debates in parliaments around the world to its use in writing TV news segments. Public interest in generative AI models has led many of the world's largest tech companies to introduce their own chatbots or to speak more openly about plans to deploy AI in the future.
In just 12 months, conversations about AI shifted from concerns over students using it to complete their homework to the first AI safety summit, where nations and tech companies gathered to discuss how to prevent AI from surpassing humanity or even posing an existential threat. Throughout 2023, AI-related products rolled out rapidly, with Google, Microsoft, and Amazon announcing generative AI offerings in the wake of ChatGPT's success. Google unveiled Bard, a new chatbot drawing on data from its search engine. Amazon used its major product launch event this year to show how it employs AI to make its virtual assistant Alexa sound more coherent and respond in a more human-like manner.
Microsoft began rolling out its new Copilot program, which aims to combine generative AI with a virtual assistant on Windows, allowing users to request help with any task they perform. Elon Musk announced the creation of xAI, a new company dedicated to the AI field, whose first product, Grok, is a conversational AI available to paying subscribers of the X platform. Governments and regulators could not ignore such sweeping developments, and discussions around regulating the AI industry intensified throughout the year. Even so, no AI bill has been introduced yet, a delay criticized by experts who warn about the risks posed by the proliferation and evolution of AI tools. In contrast, the European Union approved its own set of rules on AI oversight, which will give regulators the ability to scrutinize AI models and require details on how those models are trained, although the rules are unlikely to become law before 2025.