Artificial intelligence (AI) has certainly been hitting the headlines in 2023. We’ve had warnings of its potential to bring about the extinction of humanity, claims that it poses a national security threat, calls for all training of AI above a certain capacity to be halted for at least six months and a resignation by the ‘Godfather’ of AI. Generative AI (GenAI) models, such as ChatGPT, seem to be some of the most discussed, with much debate around their potential to transform our everyday lives. But what about in an enterprise environment? How can businesses harness the potential power of this truly transformative technology and to what end?
Although the field of GenAI is still pretty nascent, we are definitely at an inflection point in AI and computing in general. Most of the large language models making a splash in the generative AI space are good at natural language processing (NLP). Across a multitude of industries, these GenAI models can help with NLP-based applications, such as providing interactive help. You can expose your knowledge base/end-user manuals and documentation through a GenAI-based interactive chatbot, which will make finding information vastly easier for users.
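To make that concrete, the sketch below shows one common way to wire a knowledge base into a GenAI chatbot: embed the documentation, retrieve the most relevant passage for each question and have the model answer from that context. It is a minimal illustration only, assuming the OpenAI Python client; the sample pages, model names and in-memory index are placeholders rather than a recommendation of any particular product.

    # Minimal retrieval-augmented chatbot sketch over an internal knowledge base.
    # Model names and the in-memory "index" are illustrative assumptions; a real
    # deployment would use a proper vector database and access controls.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    manual_pages = [
        "To reset your password, open Settings > Security and choose Reset.",
        "Invoices are generated on the first business day of each month.",
    ]

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
        return np.array([d.embedding for d in resp.data])

    page_vectors = embed(manual_pages)

    def answer(question):
        q_vec = embed([question])[0]
        # cosine similarity between the question and every knowledge-base page
        scores = page_vectors @ q_vec / (
            np.linalg.norm(page_vectors, axis=1) * np.linalg.norm(q_vec)
        )
        context = manual_pages[int(scores.argmax())]
        chat = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "Answer using only the provided context."},
                {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
            ],
        )
        return chat.choices[0].message.content

    print(answer("How do I reset my password?"))

A production version would swap the in-memory index for a dedicated vector store and keep the source documents behind the company’s existing permissions, but the retrieve-then-answer pattern stays the same.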
Another immediate benefit, although a considerably bigger challenge to deliver, is NLP-based enterprise-wide search across business data. This is, of course, an ever-evolving space, with enterprise software businesses already hard at work investigating how GenAI models can complement existing NLP solutions and AI offerings. That could mean enhancing contextual experiences, integrating voice chat with digital assistants or machine learning (ML) models through AI platforms, or extending enterprise search to image recognition.
And, because GenAI models enable users to tap into a variety of data sources to generate text and code, formulate predictions and summaries, perform translations, analyse images and more, they lend themselves to a wide range of enterprise use cases. These include writing emails, reports, product documentation and web content; creating job descriptions and requisitions; performing product and vendor comparisons; and assembling photos, music tracks and videos for marketing campaigns. The NLP skills of GenAI models can also be put to good use to summarise books, review and proofread content, and provide ideas to jumpstart an initiative.
GenAI in action
So, what does this look like in practice? Companies with IT and software engineering departments, for example, can establish a healthy practice of leveraging tools such as Microsoft’s Copilot or Amazon CodeWhisperer for code generation. Businesses that need to build their own industry-specific language models, verify general information, source reviews and recommendations from the web, or combine their private enterprise data with information in the public domain can integrate with GenAI tools and platforms such as OpenAI’s ChatGPT or Amazon Bedrock.
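As a rough illustration of that second pattern, the sketch below sends a snippet of private enterprise data to a foundation model hosted on Amazon Bedrock and asks it for a summary. It assumes the boto3 SDK and the Anthropic Claude messages request format; the model ID, region and sample data are assumptions to be replaced with whatever is enabled in your own account.

    # Sketch: summarise private enterprise data with a Bedrock-hosted model.
    # The model ID and request body follow the Anthropic Claude "messages"
    # format and are assumptions; check the Bedrock documentation for the
    # models actually enabled in your account.
    import json
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    internal_notes = "Q3 revenue grew 12%; churn in the APJ region fell to 4%."
    question = "Draft a two-sentence executive summary of our Q3 performance."

    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [
            {"role": "user",
             "content": f"Internal data: {internal_notes}\n\nTask: {question}"}
        ],
    }

    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed; substitute any enabled model
        body=json.dumps(body),
    )
    result = json.loads(response["body"].read())
    print(result["content"][0]["text"])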
Challenges ahead
The pace of change in the world of GenAI is rapid, and organisations that don’t respond in time may be left behind. Ideally, businesses should be embracing this powerful technology rather than rejecting it. But that doesn’t mean one size fits all when it comes to GenAI models, and there are certainly a number of challenges to be addressed before they can gain widespread adoption in enterprise environments.
First, there’s the issue of reliability. While the content generated by a large language model looks original, it is in fact reproducing patterns from the training data it has been exposed to. The generated information is often simply wrong, and the same question can produce different answers each time it is asked.
Secondly, we have privacy issues. The data and prompts that users share may be used to train the underlying model, so valuable trade secrets or personally identifiable information (PII) can be shared inadvertently, leading to compliance violations. In addition, the generation and exchange of business-specific content must adhere to strict legal and data privacy requirements; for example, when companies perform a Data Protection Impact Assessment (DPIA) they must ensure compliance with the General Data Protection Regulation (GDPR). Most GenAI platform vendors do offer the option of keeping enterprise data exclusive and not using it for general training purposes, but it’s important that businesses planning to use GenAI take this into account.
Then there’s the issue of bias. Content generated by AI is tailored to the input prompt, and a model can be trained on favourable data points only, without ever being exposed to the full picture. Ultimately, the output can be moulded in whatever direction is wanted, useful or harmful. Generated content can sound authoritative while in fact presenting a subjective view, making it easy to manipulate a gullible user and influence their views convincingly. The risk of generating fake news and fabricated video and audio clips will also only increase.
Moderation filters
That’s not to say that these challenges are insurmountable. One way to combat these threats is to apply proper moderation filters to the end-user interfaces through which everyday users access GenAI tools. And, without a doubt, for business use, enterprises must follow a ‘human in the middle’ approach; that is, all generated content must be moderated by a real person before being released for regular consumption. Human control and moderation will be required for some time to boost the accuracy and consistency of generated content, help reduce socio-political biases and ensure that a company’s competitive edge is not compromised.
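As a simple illustration of that approach, the sketch below chains an automated moderation filter with an explicit human sign-off before anything is published. It assumes the OpenAI Python client and its moderation endpoint; the console prompt merely stands in for whatever review workflow a business actually uses.

    # Sketch of a "human in the middle" publishing gate: candidate GenAI output
    # is first screened by an automated moderation filter (OpenAI's moderation
    # endpoint here), then held for explicit human approval before release.
    # The review step is simulated with a console prompt for illustration.
    from openai import OpenAI

    client = OpenAI()

    def ready_to_publish(draft: str) -> bool:
        # Automated filter: reject anything the moderation model flags.
        moderation = client.moderations.create(input=draft)
        if moderation.results[0].flagged:
            print("Blocked by moderation filter.")
            return False
        # Human in the middle: a real person signs off before anything ships.
        print("Draft for review:\n" + draft)
        decision = input("Approve for publication? [y/N] ").strip().lower()
        return decision == "y"

    draft = "Our new product update improves report generation speed by 30%."
    if ready_to_publish(draft):
        print("Publishing approved content.")
    else:
        print("Draft returned to the author for revision.")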
Considering all of the above, enterprises need to develop a point of view on how GenAI applies to them. It will also be vital to follow best practices from GenAI vendors, such as the use of OpenAI’s moderation filters. At the same time, individual countries are scrambling to come up with their own AI policies, something else businesses will need to take into account by adhering to local AI policy and following the protocols outlined by their respective governments.
Rapid evolution
In terms of how generative AI will evolve over the next 5–10 years, investment in the technology will increase tremendously, both in generating better models and in the hardware space, with faster, more powerful chips and the need for greater network bandwidth. Its impact should not be underestimated. All the media content we consume in the coming years will be influenced by GenAI; internet search as we know it will move towards a tailored, conversational experience; tools that detect AI-generated content will get smarter; and regulation and compliance requirements will get ever tighter.
ChatGPT and other GenAI models represent disruptive solutions that are already helping consumers refine the search process, automate the creation of content and boost individual productivity. While we expect enterprises to adopt this powerful technology rapidly, we also hope they are aware of the potential risks, inaccuracies and privacy concerns involved. Naturally, it’s only a matter of time before the GenAI space matures and addresses such concerns. In the meantime, with human control and moderation, GenAI models have the potential to revolutionise enterprise environments.
By Terry Smagh, Senior Vice President & General Manager, APJ, Infor
This article was first published by Technology Decisions