The Future of "ChatGPT" Concerns Humanity

The "ChatGPT" project aims to make conversations with artificial intelligence feel smoother and more natural, closer to a human exchange. Users can pose questions to the robot, which responds in complete sentences, attempting to mimic the rhythm of a conversation with a real person. The question, however, remains: can artificial intelligence replace humans?

It is unlikely that the "ChatGPT" model will be able to fully replace humans in the near future, as it lacks the ability to understand the world or reason logically, and it possesses neither creativity nor the capacity to generate genuinely new ideas. To illustrate the point, one user asked the AI: "Will you replace humans later?" The robot replied: "I am not capable of fully replacing humans. I can now perform some tasks that were previously done exclusively by humans, with greater accuracy and efficiency. However, there are still many tasks that require human skills and abilities such as empathy, creativity, and critical thinking that AI models like me have not yet been able to replicate."

#### "ChatGPT": Pros and Cons

"ChatGPT" is one of the most widely discussed technology topics today, drawing reactions that range from uncritical enthusiasm to outright rejection. Technology and digital transformation expert Paul Semaan said in an interview with "Wadna" that "the robot helps people in their work, especially in journalism (reports, articles, etc.), in improving foreign language skills, in interpreting and summarizing large texts, as well as in translating between several languages. It can answer exercises of various kinds, since it is not specialized in any one field."

He stated that "the robot can respond on any topic, whether medicine, space, databases, programming, or business management. It can also respond in Arabic, and the new version has evolved to answer in the informal language of social media." On the other hand, Semaan warned that "there are scams that arrive via email, claiming that you have won a sum of money and asking you to call a specific number. These tricks had become outdated, but AI has refined them, making the emails appear more polished and professional, like those from global companies." He added that "there are open-source sites and applications that hackers can use to exploit 'ChatGPT' and discover security vulnerabilities in systems under the guise of education."

#### Regulations to Control Robots

American businessman Elon Musk and a group of AI experts and industry executives have called, in an open letter, for a six-month pause on the training of systems more powerful than the recently launched "GPT-4" model from "OpenAI," citing potential risks to society and humanity. This followed a warning from the European Union's law enforcement agency "Europol" about the ethical and legal concerns raised by advanced AI like "ChatGPT," noting the potential for misuse of the system in phishing attempts, misinformation, and cybercrime.

On the issue of laws, digital transformation and information security expert Roland Abi Najm told "Wadna" that the main problem lies in who drafts the laws: lawmakers often lack the necessary technical expertise, and acquiring it requires several training courses, which delays legislation. He pointed out that "there are lawsuits in America involving Google and YouTube in which judges cannot rule, because the subjects are technical and the judges lack the required technical awareness."

Abi Najm continued, discussing enforcement: "Each country must determine how to apply its laws and how to ensure compliance with them. Most countries, especially in Europe, are calling for a six-month halt on the development of AI platforms in order to build a legal regulatory mechanism for them." Italy became the first Western country to ban the ChatGPT program, after its data protection authority opened an investigation into "OpenAI" over privacy concerns and allegedly unlawful data collection practices; the service was later restored once the company improved transparency and the rights of European users. Semaan noted that "China has banned the AI because it is manufactured and developed by an American company and could be biased towards Washington, so Beijing has worked on developing its own alternative through its search engine 'Baidu.'"

#### "ChatGPT" in Education

Semaan, who is also a university professor, pointed out that "artificial intelligence helps professors and students obtain answers, but it also makes it harder for professors to discern whether a piece of work comes from the student or from the robot. In such cases, we resort to oral examinations in class to verify." He urged other teachers to keep pace with the technology and learn how to interact with the robot, making it easier to identify the sources of their students' homework. He noted that "several schools and universities have banned 'ChatGPT,' and unfortunately this is a misguided decision, as we now live in a technological era."

From another perspective, parents have a crucial role in keeping their children from becoming overly reliant on artificial intelligence. Semaan believes that "the robot can extract personal information from children, and to avoid this, parents must monitor their children closely, since they engage with technology without caution."

#### Paid Version of "ChatGPT"

"OpenAI" has introduced a paid pilot subscription to its famous "ChatGPT" chatbot, offering certain advantages for $20 a month. The launch of "ChatGPT Plus," as it is known, is aimed at those seeking additional features, such as priority access during peak hours and faster response times.

#### The "Godfather of AI" Leaves Google

Geoffrey Hinton, who, along with two others, won the 2018 "Turing Award" for their foundational work leading to the current surge in artificial intelligence, announced that part of him regrets his life's work. Hinton recently resigned from his job at Google to freely discuss the dangers of artificial intelligence, according to a recent interview with the 75-year-old in the "New York Times." Hinton, who worked at Google for over a decade, remarked: "I console myself with the ordinary excuse: If I had not done it, someone else would have. It’s hard to see how you can prevent bad actors from using this technology for malicious purposes."

Hinton joined Google after the company acquired a startup he had co-founded with two of his students, one of whom went on to become chief scientist at "OpenAI." With his students, he developed a neural network that learned to recognize common objects like dogs, cats, and flowers after analyzing thousands of images; this line of work ultimately led to the creation of the "ChatGPT" model and "Google Bard." In the interview, Hinton said he had been satisfied with Google's stewardship of the technology until Microsoft launched its new "Bing" integrated with "OpenAI's" model, posing a challenge to Google's core business. Commenting on the interview, he tweeted that he "left Google to be able to speak about the dangers of AI without considering how this would impact Google," adding that "Google has acted very responsibly."
