The advent of large language models (LLMs) marks an evolution in our integration of AI, and raises questions about the future of employment and the relationship between humans and machines.
While some fears are legitimate, they should not cloud our vision of the future: continuing to position ourselves in competition with the machine, seeking to act like it, only fuels those fears further as the evidence that such a competition is lost becomes ever clearer. Rather than giving in to distrust, everyone should seize the opportunity AI offers us to refocus on what constitutes the essence of our humanity. Whether in its design or its use, understanding how these models work, how they differ from the human brain, and how to begin repositioning our expertise opens up bright prospects.
A “human” training
If ChatGPT surprises with its ability to produce responses of a naturalness reminiscent of human speech, to the point of catalyzing an ever more marked anthropomorphism, it is worth remembering the importance given to human expertise in its training, unlike many other models (e.g., BERT). Beyond masked language modeling, which consists of hiding words in various sentences and tasking the model with recovering them, ChatGPT's training requires humans to evaluate its outputs against criteria such as the relevance of responses, ethics, and respect for human values. Once this first step is complete, reinforcement learning is used to improve the model's performance: the principle is to give the model positive or negative rewards depending on its actions. It is by integrating these rewards that the model learns the rules and the correct response strategies. In the case of ChatGPT, the more closely its generated answers align with those favored by the reward model, which has learned human preferences, the more the model is rewarded. This design process thus demonstrates the importance of human expertise in training, both for performance and for ethics. Models that do not include these human preferences in their training continue to struggle to perform as well as a person. For example, recent research conducted by Meta shows that: (1) thanks to masked language modeling, LLMs are able to build a representation of words from their immediate context, as the brain can do; (2) the brain, however, enriches this first layer of representation with a temporal context and a larger hierarchy, in order to build a richer understanding of the text. LLMs whose training does not include reinforcement steps based on human preferences remain unable to achieve this sophisticated understanding.
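The two training steps described above can be sketched with a toy example. Everything below (the tiny corpus, the candidate responses, the `human_reward` function) is invented for illustration; it is not ChatGPT's actual implementation, only a minimal stand-in for the two ideas: guessing hidden words from context, and shifting behavior toward human-preferred answers via rewards.

```python
import random
from collections import Counter, defaultdict

# --- (1) Masked language modeling on a tiny invented corpus ---------
corpus = [
    "the cat sat on the mat",
    "the cat slept on the sofa",
    "the dog sat on the mat",
]

# Count which words appear between each (left, right) context pair.
context_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        context_counts[(words[i - 1], words[i + 1])][words[i]] += 1

def fill_mask(left, right):
    """Predict the most likely hidden word between `left` and `right`."""
    candidates = context_counts.get((left, right))
    return candidates.most_common(1)[0][0] if candidates else None

# --- (2) Reward feedback from (simulated) human preferences --------
responses = ["I don't know.", "Here is a helpful, sourced answer."]
scores = {r: 0.0 for r in responses}

def human_reward(response):
    # Stand-in for a reward model trained on human preference rankings.
    return 1.0 if "helpful" in response else -1.0

random.seed(0)
learning_rate = 0.5
for _ in range(10):
    r = random.choice(responses)                  # model "tries" a response
    scores[r] += learning_rate * human_reward(r)  # reward shifts its score

best = max(scores, key=scores.get)  # the human-preferred answer wins out
```

The point of the sketch: step (1) only needs raw text, while step (2) needs a human (or a reward model trained on human judgments) in the loop, which is exactly the expertise the article highlights.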
Furthermore, LLMs are stochastic parrots: they generate text from probabilities, and have no capacity for planning, awareness, or updating information. By contrast, information that reaches us (e.g., the result of a match) is immediately integrated by our brain, in order to optimize our ability to predict future events. Thus, when information surprises us, the hippocampus, a brain structure associated with memory, understands that it is time to restructure what it has stored, and switches from a preservation mode to an updating mode. LLMs do not have this plasticity: with billions of parameters, it is impossible to know which ones to modify in order to update a given piece of information, and full retraining would be costly. Part of LLM research is thus devoted to overcoming these limits, in order to get as close as possible to the brain's differentiating capacities.
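The "stochastic parrot" point can be made concrete with a minimal bigram generator, a deliberately simplified stand-in for an LLM (the training sentence is invented). Each next word is simply sampled from frequencies observed in the training text, with no plan for where the sentence is going; and the only way to "update" a stored fact, such as a match result, is to rebuild the counts from new text, the toy analogue of costly retraining.

```python
import random
from collections import Counter, defaultdict

# Invented training text; a real LLM sees billions of words.
training_text = "the match ended the match was won the team won the cup"
words = training_text.split()

# Bigram counts: how often each word follows another.
transitions = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    transitions[prev][nxt] += 1

def generate(start, length, seed=0):
    """Parrot back plausible sequences by sampling likely successors."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        counts = transitions.get(out[-1])
        if not counts:  # dead end: no known successor
            break
        choices, weights = zip(*counts.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

text = generate("the", 6)
```

Every word the generator emits is drawn from a learned probability distribution over what came next in training; nothing in the loop plans ahead or checks facts, which is the limitation the brain's updating mechanism does not share.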
A renewal of our expertise
Used in an enlightened way, as a complement to our own qualities, generative AI and LLMs can significantly enhance our abilities. Research published by MIT researchers, studying the impact of ChatGPT on skilled workers' performance on writing tasks, shows that using ChatGPT makes it possible to: (1) complete the task more quickly, (2) create content judged to be of higher quality in terms of writing, content, and originality, and (3) improve workers' satisfaction in completing the task. While the tool allowed high-performing workers to go faster, it above all allowed others, initially less capable, to raise the quality of their output, to the point of narrowing the performance gap between the worst and the best. Other research has also highlighted the ability of AI to improve individual decisions, and the capacity of humans and AI to enrich each other. These findings call for promoting a distributed cognition that affirms the need for human expertise in a world augmented by AI. In this sense, if AI is a major technological breakthrough, it above all represents a human revolution, and requires an essential change in our metacognition, our humility, and our relationship with the world. Ultimately, technologies do not change societies; it is their reappropriation by humans that makes them evolve.
To achieve this, we need to understand our own intellectual limits and our complementarities with AI, to affirm our curiosity rather than our pride, and to learn to ask the right questions. Everyone must thus develop, along their own journey, the critical thinking needed to understand the potential biases of an AI, but also their own human biases. The question is not whether AI is perfect, but whether it can do better than the human status quo. Intellectual emancipation then becomes an essential lever for forging an enlightened understanding of AI's potential, but also of its gray areas. In this sense, AI opens up extraordinary possibilities of access to knowledge for all, and can serve as an exceptional learning lever. It is up to us to transform these assets into concrete and beneficial actions: that is our share of humanity and expertise. The real threat is therefore not AI itself, but that we try to turn ourselves into automatons: to maintain an advantage over machines is, above all, not to act like them. It is therefore time, as humans, to truly understand and redraw the real place of our expertise, our work, and our humanity.
An op-ed written by: Emeric Kubiak, Head of Science @AssessFirst