Sam Altman, the head of OpenAI, the organization responsible for the widely popular ChatGPT bot, has addressed concerns about the impact of artificial intelligence (AI) on the job market.
He addressed these concerns while on a global tour to meet national leaders and influential figures, seeking to reassure audiences that, contrary to some predictions, AI would not automate entire sectors of the workforce out of existence.
At 38, Altman is a prominent Silicon Valley figure who has been warmly received by leaders around the world, in cities including Lagos and London.
However, earlier this week, Altman’s remarks seemed to have irked the European Union when he suggested that OpenAI might consider leaving the bloc if regulations became too stringent.
In response, Altman clarified his position to a group of journalists during the Paris event, stating that the headlines were unfair. He emphasized that OpenAI had no intention of departing from the bloc; instead, the company was more likely to establish an office in Europe in the future.
AI to advance jobs rather than threaten them
During his tour, Altman said the notion that AI will progress to the point where humans are left without work or purpose does not resonate with him.
He expressed confidence that AI tools like ChatGPT should be seen as aids to journalists rather than threats to their profession, comparing them to giving each journalist a team of 100 assistants to help with research and idea generation.
ChatGPT gained significant attention last year for its ability to generate essays, poems, and conversations based on minimal prompts. Following its success, Microsoft invested billions of dollars in supporting OpenAI and now integrates the company’s technology into various products.
This development triggered a competitive race with Google, which has also made several similar announcements in the field of AI.
The success of ChatGPT
The success of ChatGPT, which has been utilized by politicians for speechwriting and has demonstrated its ability to pass challenging exams, has propelled Altman into the global spotlight.
Reflecting on this newfound attention, Altman expressed that while it feels special, it is also quite exhausting, and he hopes for a calmer life in the future.
How it began
OpenAI was established in 2015 with investors including Altman and Elon Musk, the billionaire owner of Twitter. Musk departed from the company in 2018 and has recently criticized it on multiple occasions.
Musk, who has his own ambitions in AI, claimed that he came up with the name OpenAI and invested $100 million in the company. He said he felt betrayed when OpenAI transitioned from a non-profit to a profit-oriented organization in 2018, and has asserted that Microsoft effectively runs the company now.
In response, Altman stated that he disagreed with Musk’s assertions but aimed to avoid engaging in a dispute.
He emphasized the importance of focusing on OpenAI’s mission, which is to maximize the societal benefits of AI, particularly Artificial General Intelligence (AGI).
AGI refers to a hypothetical future in which machines can excel across a wide range of tasks rather than a single domain. Altman acknowledged that definitions of AGI are ambiguous and lack consensus, but offered his own: the point at which machines can achieve significant scientific breakthroughs, such as unravelling the fundamental theories of physics.
Criticisms of AI
A significant critique of OpenAI’s products is their lack of transparency regarding the sources used to train their models.
Critics argue that, in addition to potential copyright concerns, users should be aware of the origin of the information provided and whether it includes content from offensive or racist web pages.
However, Altman contended that the key question is whether the models themselves exhibit racial bias. What matters, he argued, is how the models perform on racial bias tests, not the disclosure of the specific sources used during training.