Generative AI: It Is Forbidden To Ban!

Simply banning ChatGPT is not an effective generative AI policy. Of course, we must protect ourselves from potential abuses and risks, but this must be done as part of a strategy that will encourage the secure, governed use of generative AI tools.

Olivier Rafal, Consulting Director, Strategy at WEnvision: “The CIO and their teams must be the first to identify, evaluate, and test generative AI tools.”

Banning, blocking, blacklisting: the press is full of examples of companies that have banned ChatGPT. If Samsung, Apple, Deutsche Bank, Verizon, and Amazon did it, it must be the right decision, right? The answer is more complicated. Of course, banning ChatGPT seems entirely appropriate at first: data leaks must be avoided. The example of Samsung is particularly striking: employees entrusted the conversational AI with meeting notes to summarize and source code to analyze… All of that data has enriched ChatGPT's knowledge base and could therefore resurface in answers given to other companies.

After shadow IT, shadow AI?

But such a ban must be targeted, temporary, and accompanied by a set of strategic decisions and actions that encourage the effective and secure use of generative AI tools. Otherwise, it amounts to shooting yourself in the foot. First, because companies and employees who use generative AI gain enormously in productivity and can bring innovative services to market faster. Second, because we all know it: business users always find a way around a ban once they have discovered something far more practical. And if ChatGPT and other generative AI applications took off so quickly, it is precisely because these tools provide enormous help in everyday work.

After “shadow IT,” here comes the era of “shadow AI,” in which employees find a way to use generative AI from home, from their phone, or through a new application that has not yet been blacklisted. As with the Internet, collaborative tools, and social networks, these protective measures must quickly be accompanied by supervision and encouragement.

The first lever to activate: acculturate employees

The first problem to tackle is the lack of understanding of how these tools actually work. And here, the mainstream media and contests of the “ChatGPT vs. a philosopher” or “ChatGPT vs. a lawyer” variety have not helped. We must therefore begin by recalling a certain number of facts. In particular, generative AI does not reason or understand what it produces. It simply predicts the next word in a text, the next pixel in an image, and so on, based on its training data. If it does not have the information, it will invent it by applying a probabilistic mechanism; we then say that it is hallucinating. This can be problematic for professional use and requires great attention from the user.
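To make the “predicting the next word” point concrete, here is a minimal, purely illustrative sketch in Python: the vocabulary and probabilities are invented for the example, whereas a real language model computes such a distribution over tens of thousands of tokens at every step.

```python
import random

# Invented probabilities for the next word after a given prompt (illustration only).
next_token_probs = {
    "contract": 0.41,   # plausible continuation
    "meeting": 0.27,
    "invoice": 0.19,
    "unicorn": 0.13,    # low-probability continuation -> a possible "hallucination"
}

def sample_next_token(probs: dict) -> str:
    """Pick the next token at random, weighted by the model's probabilities."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "Please send me the signed"
print(prompt, sample_next_token(next_token_probs))
```

The point of the sketch is that the model never “knows” the answer; it only draws the statistically likely continuation, which is exactly why it can produce confident nonsense when the information is missing.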

The other element of misunderstanding concerns the distinction between consumer tools, which feed on every prompt they receive, and professional tools, for which the supplier guarantees the isolation of each customer's environment and the confidentiality of the data entered. In other words, it is a question of distinguishing between public generative AI (e.g., ChatGPT and Google Bard) and private generative AI (e.g., the GPT models accessed via API and Google PaLM).

Given the multiplicity of tools and associated usage rules, this distinction can be complex, even for IT departments. For example, while OpenAI will retrieve the data entered in ChatGPT to feed its model, data sent through its API and enterprise offerings is, by default, not used for training.
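As an illustration of what a “private” route looks like in practice, here is a minimal sketch assuming the official OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY environment variable; the model name and prompts are chosen purely for the example.

```python
# Sketch of calling a model through the API rather than the consumer chat interface.
# Per OpenAI's stated API data-usage policy, data sent this way is not used to
# train the models by default, unlike prompts typed into the public ChatGPT site.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # model name chosen for illustration
    messages=[
        {"role": "system", "content": "You are an internal assistant; never reveal confidential data."},
        {"role": "user", "content": "Summarize the attached meeting notes in three bullet points."},
    ],
)
print(response.choices[0].message.content)
```

The governance question is therefore less “which model?” than “through which channel, under which contractual guarantees?”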

Beyond explaining to people why they no longer have the right to access certain tools, it is essential to acculturate them to this type of tool: how to use them, how they can be misused, and the regulatory framework that surrounds them. It is also essential to set up clear, well-equipped governance, because at some point users will have to be trusted, just as they are with access to the Internet, LinkedIn, and other services where it is just as possible to leak confidential information.

The second lever: provide a technical environment

The second lever must be prepared even before activating the first. You have to think very quickly about the tools you want to provide to your employees. The list does not need to be complete from the start, but this first iteration absolutely must begin very early, so that from the acculturation phase onward you can offer a technical environment with secure access to an approved set of tools. First, this avoids the “shadow AI” effect and keeps usage within a controlled environment. Second, this early choice gives the acculturation phase a practical dimension, with concrete examples carried out using the authorized tools, and even hands-on experience with them.

This does not necessarily require building and operating your own language model: providers such as Google, Microsoft, Amazon, and pure players have quickly grasped the value of offering compartmentalized instances for companies. It is better to devote one's efforts to assembling such resources: for example, a vector database to add semantic search over one's documents, an orchestrator to chain calls to services, or studios to create, optimize, and version the prompts used.
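To illustrate what a vector database brings in practice, here is a deliberately tiny sketch: the embed() function is a toy bag-of-words stand-in (a real setup would use an embedding model and a dedicated vector store), and the documents and question are invented for the example. The retrieved passage is then injected into the prompt sent to the model.

```python
# Toy retrieval step: turn documents and a question into vectors, find the
# closest document, and ground the prompt with it.
from collections import Counter
import math

documents = [
    "Expense reports must be submitted before the 5th of each month.",
    "Source code reviews are mandatory for every merge request.",
    "Customer data may not be shared with external tools.",
]

def embed(text: str) -> Counter:
    """Toy embedding: word counts (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

question = "Can I paste customer data into an external AI tool?"
q_vec = embed(question)
best = max(documents, key=lambda d: cosine(embed(d), q_vec))

# The retrieved internal policy is injected into the prompt that will be sent to the model.
prompt = f"Answer using this internal policy:\n{best}\n\nQuestion: {question}"
print(prompt)
```

This is the kind of assembly work (retrieval, orchestration, prompt versioning) that brings far more value than training a model from scratch.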
