The following document was a brief I wrote in May of 2023. The world was abuzz with excitement about OpenAI’s new LLM, ChatGPT. I had been watching the space for some time and had begun to experiment with the tool a bit, leaning into my education around prompt engineering, and reading about the underlying technology. At my office, a few of us were discussing the potential for these tools. At a large meeting, a member of our Leadership Team overheard me discussing ChatGPT with my manager. They asked, “What’s that?” I was floored. I knew they were busy, but this already seemed like such a huge source of potential advancement, and perhaps more importantly, a huge source of risk. I told the member of Leadership that I would be happy to write a brief on the subject for them, as it was likely something of which they should be aware. What was delivered to them was a one-page executive summary of the following document. However, the full brief circulated in other company circles and resulted in quite a few noses getting put out of joint. Perhaps I shall write about that later. For now, this is part of my body of work. Enjoy. Note: this is frighteningly low-level and was already wildly out of date by the time I wrote it, because such is the nature of all information around AI.
Download the PDF version here:
LINK
__________________________________________________________________________________________________________
EXECUTIVE SUMMARY
Various AI/ML/LLM tools and technologies have recently made headlines in every publication across all industries. Artificial intelligence (AI), machine learning (ML), and large language models (LLMs) performing natural language processing (NLP) and natural language generation (NLG) tasks are everywhere.
OpenAI is an AI research and deployment company. ChatGPT is an AI chatbot that was initially built on a family of large language models (LLMs) collectively known as GPT-3. These models can understand and generate human-like answers to text prompts, because they’ve been trained on huge amounts of data.
AI and Machine Learning can help oil and gas companies optimize operations, reduce costs, and improve safety and environmental sustainability. These tools can be used by businesses to streamline a large number of tasks and have the potential to create more efficiency in almost every area of business operations. Legal departments are also finding many ways AI/ML can be useful within their practices.
In fact, COMPANY is already using many programs and applications that use this technology in some fashion.
There are many serious security and privacy concerns around the use of these tools. Not only will email and phone scams and phishing attempts increase; they will also become more personal, specific, and better targeted. The following voice clone of our CEO was created in under five minutes using free tools and freely available audio from the internet: LINK voice clone 2.wav
(for security’s sake, this link is internal only. The content of the cloned voicemail was the following: “Hey there, [name], we have a problem. Can you meet me at the office this Saturday to discuss our AI risk management framework? Uh, please bring your dog, a purple elephant and uh, million dollars cash. See you then.” I made the content nonsensical intentionally, for security purposes, but it was exceedingly convincing for something I threw together in literally five minutes with a free tool and a few seconds of audio grabbed from YouTube.)
A significant increase in cyberattacks has already been identified and documented by cybersecurity experts since the release of these easily accessible tools. The accidental or even malicious uploading of privileged and proprietary data from within COMPANY is a major concern for both Cybersecurity and the Law organization.
Other considerations and risks include:
1. The impending wave of companies using these models to build businesses capitalizing on the need for data-driven, climate-based action.
2. A talent and culture revolution as more and more of the global talent pool begins to seek employment with companies using and integrating this technology into all aspects of their daily work lives.
3. Intellectual Property, Copyright and Trademark concerns.
While a careful consideration of these tools and technologies is fully warranted, a measured response should be used when deploying sweeping, company-wide blocks or suspensions.
Introduction:
Various AI/ML/LLM tools and technologies have recently made headlines in every publication across all industries. This brief aims to give an overview of the state of AI/ML/LLM as it pertains to COMPANY, Cybersecurity, Legal department considerations, and the O&G industry as a whole.1 Links are provided in line for the reader’s convenience. Some links are currently hosted in the AI Team site.2
What is AI?
Artificial intelligence (AI) is an umbrella term for different strategies and techniques you can use to make machines more humanlike. AI includes everything from smart assistants like Alexa to robotic vacuum cleaners and self-driving cars.
What is ML?
Machine learning (ML) is one of many branches of AI. ML is the science of developing algorithms and statistical models that computer systems use to perform complex tasks without explicit instructions. The systems rely on patterns and inference instead. Computer systems use ML algorithms to process large quantities of historical data and identify data patterns. While machine learning is AI, not all AI activities are machine learning.3
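The pattern-learning idea above can be sketched in a few lines of Python: instead of hand-coding rules, we fit a simple model to historical data and let it infer the relationship. The data and the one-variable linear model here are illustrative assumptions for explanation only, not anything COMPANY uses.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Historical observations (invented for the example):
# hours of equipment runtime vs. measured wear.
hours = [100, 200, 300, 400, 500]
wear = [1.1, 2.0, 2.9, 4.1, 5.0]

a, b = fit_line(hours, wear)
predicted_wear = a * 600 + b  # extrapolate the learned pattern to 600 hours
```

No one wrote a rule saying "wear grows with runtime"; the model inferred it from the data. That, at a vastly larger scale, is the core of ML.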
What is LLM?
Large language models (LLMs) are foundation models that utilize deep learning in natural language processing (NLP) and natural language generation (NLG) tasks. Large language models are pre-trained on vast amounts of data to help them learn the complexity and linkages of language.4
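As a rough intuition (and nothing more), a language model learns from its training text which token is most likely to come next. The toy bigram counter below is an illustrative sketch with made-up training text; real LLMs do the same job with billions of parameters over enormous corpora.

```python
from collections import Counter, defaultdict

# Made-up training text for the illustration.
training_text = (
    "the well produced oil . the well produced gas . the rig drilled the well"
)

# Count, for each word, which word follows it (a "bigram" model).
counts = defaultdict(Counter)
tokens = training_text.split()
for current, nxt in zip(tokens, tokens[1:]):
    counts[current][nxt] += 1

def predict_next(word):
    """Return the token most frequently seen after `word` in training."""
    return counts[word].most_common(1)[0][0]
```

Here `predict_next("well")` returns "produced" because that is what the training data most often showed after "well"; an LLM's fluent answers arise from the same next-token principle applied at enormous scale.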
What is OpenAI?
OpenAI is an AI research and deployment company. It began as a multi-party nonprofit initiative that included Amazon Web Services, Infosys, YC Research, Elon Musk, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, and Peter Thiel. In 2023 it entered into a partnership with Microsoft that includes a $10 billion investment, believed to be part of Microsoft's efforts to integrate ChatGPT into its Bing search engine. PricewaterhouseCoopers has recently announced a $1 billion investment in OpenAI and plans to use GPT-4 and the Microsoft Azure OpenAI service.5
What is ChatGPT?
ChatGPT is an AI chatbot that was initially built on a family of large language models (LLMs) collectively known as GPT-3. OpenAI has now announced that its next-gen GPT-4 models are available. These models can understand and generate human-like answers to text prompts, because they’ve been trained on huge amounts of data. ChatGPT reached an estimated 100 million users in only two months, making it the “fastest-growing consumer application in history,” according to a UBS investment bank research note. (TikTok took nine months to get to 100 million, Instagram 2.5 years.)6
Other similar tools:7
- The leader: ChatGPT
- The runner-up: Google Bard
- Internet-connected: Microsoft Bing AI, Perplexity, YouChat, KoalaChat
- For content writing: Jasper Chat, Chat by Copy.ai, ChatSonic, ZenoChat
- For messaging: Personal AI
- For tinkering: OpenAI playground, Poe
- For fun: AI Test Kitchen, Character.AI
- For coding auto-complete: GitHub Copilot, Amazon CodeWhisperer
- Up-and-coming: Claude
- Build your own AI chatbot with no-code: Zapier Interfaces
How can they be used by businesses?
Using ChatGPT 3.5: (supporting links in footnotes have been added manually)
PROMPT:
Using the sources available to you on the internet, summarize the business use cases for AI and Machine Learning in the Oil and Gas industry (note: v3.5 only has access to sources from the internet before 2021, however it has since been trained by Beta users and thus includes some later source material)
ANSWER:
AI and machine learning have several business use cases in the oil and gas industry, including:
1. Predictive Maintenance: Machine learning algorithms can predict equipment failures and maintenance needs, reducing downtime and costs associated with repairs.8,9
2. Exploration and Production: AI and machine learning can analyze data from seismic surveys and other exploration techniques to identify potential drilling sites and optimize production. 10,11
3. Reservoir Management: Machine learning can be used to improve reservoir characterization and production forecasting, allowing for better resource management. 12,13
4. Safety and Risk Management: AI can be used to monitor safety conditions and identify potential hazards, reducing the risk of accidents and environmental disasters. 14,15
5. Supply Chain Optimization: Machine learning algorithms can help optimize the supply chain by predicting demand and optimizing logistics and transportation.16,17
6. Asset Management: AI can be used to monitor and optimize asset performance, reducing costs associated with maintenance and replacement. 18,19
Overall, AI and machine learning can help oil and gas companies optimize operations, reduce costs, and improve safety and environmental sustainability.
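As a hedged illustration of the first use case above (predictive maintenance), the sketch below flags sensor readings that drift far from the historical norm, the basic signal behind catching equipment wear before failure. The vibration values and the threshold are invented for the example.

```python
import statistics

def flag_anomalies(readings, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations from the mean of the series."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    return [i for i, r in enumerate(readings)
            if stdev and abs(r - mean) / stdev > threshold]

# Pump vibration readings in mm/s (illustrative data); the spike at
# index 6 is the kind of drift a maintenance model would surface.
vibration = [2.1, 2.0, 2.2, 2.1, 1.9, 2.0, 9.5, 2.1]
alerts = flag_anomalies(vibration, threshold=2.0)
```

Production predictive-maintenance systems use far richer models over many sensors, but the principle is the same: learn what "normal" looks like from history, then alert on deviations early enough to schedule a repair instead of suffering a failure.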
Using the Free iOS Application Perplexity:
PROMPT:
Summarize the business use cases for AI and Machine Learning in the Oil and Gas industry (note: Perplexity provides current, internet-based sources for its answers as footnotes, which were included here.)
ANSWER:
AI and ML have various business use cases in the oil and gas industry. They can be used for exploration, drilling, and extraction [1]. They can also help companies assess the value of specific reservoirs, customize drilling, and optimize production [2] [3]. Additionally, AI and ML can be used to solve industry problems and process big data [4] [5].
1. https://www.birlasoft.com/articles/how-is-ai-transforming-upstream-oil-and-gas-top-use-cases
2. https://www.datarobot.com/solutions/oil-and-gas/
3. https://novilabs.com/machine-learning-in-oil-and-gas-industry/
4. https://www.sciencedirect.com/science/article/pii/S2096249521000429
5. https://www.mordorintelligence.com/industry-reports/ai-market-in-oil-and-gas
For oil and gas companies, digital transformation is a priority—not only as a way to modernize the enterprise, but also to secure the entire energy ecosystem. With that lens, the urgency of applying artificial intelligence (AI) and machine learning capabilities for optimization and cybersecurity becomes clear, especially as threat actors increasingly target connected devices and operating systems, putting the oil and gas industry in collective danger. The year-over-year explosion in industry-specific attacks underscores the need for meaningful advancements and maturity in cybersecurity programs.
To navigate today’s volatile business landscape, oil and gas companies need to simultaneously identify optimization opportunities and cybersecurity gaps in their digitalization strategies. That means building AI and cybersecurity into digital deployments from the ground up, not bolting them on afterward.20
How can they be harmful to businesses?
Top concerns:
Phishing
Phishing emails and texts will get much more sophisticated. The tell-tale signs of a scam email like misspellings and poor grammar will no longer identify suspicious messages. Access to larger sets of data from any data breaches (think emails, social media, user credentials, etc.) will give models the necessary training materials to craft convincing messages. Social media accounts are ripe for data scraping without requiring any kind of breach. Scams will now be personal, specific, and better targeted.21
Deep Fakes/Voice Clones
CASE STUDY:
Using a Chrome Extension: Chrome Audio Capture, I recorded [CEO's] voice from a YouTube clip of him speaking on [an awards show]. I used less than a minute of audio. I uploaded that audio recording to a free online vocal clone tool, and typed the message I wanted to hear in [CEO's] voice. I received the following file in under two minutes. (files are in teams site —internal users only)
[CEOs] voice clone 2.wav
NOTE: I did not script the “um” or “uh” in this audio. The program added it in at its own discretion.
There are far better tools that can be purchased for as little as $10 that would allow a user to adjust inflection, pauses, and more.
Microsoft’s VALL-E only requires 3 seconds of audio.
Cyberattacks
Using Google’s Bard: (associated links in footnotes added manually)
PROMPT:
What type of cyberattacks can AI perpetuate
ANSWER:
AI can be used to perpetuate a variety of cyberattacks, including:
• Phishing: AI can be used to create highly convincing phishing emails that are more difficult to detect. These emails may contain links to malicious websites or attachments that contain malware. 22
• Malware: AI can be used to create more sophisticated and damaging malware. This malware may be able to evade detection by traditional security measures and may be able to self-replicate or spread to other devices. 23
• Ransomware: AI can be used to create more sophisticated ransomware attacks. This ransomware may be able to encrypt files on a victim's computer and demand a ransom payment in order to decrypt them. 24
• DDoS attacks: AI can be used to launch large-scale DDoS attacks that can take down websites or even entire networks. 25
• Data theft: AI can be used to steal data from businesses and individuals. This data may be used for identity theft, fraud, or other malicious purposes.
It is important to note that this is not an exhaustive list of the cyberattacks that AI can perpetuate. As AI continues to develop, new and more sophisticated cyberattacks are likely to emerge. Businesses and individuals need to be aware of the risks posed by AI and take steps to protect themselves from cyberattacks.
Here are some tips for protecting yourself from cyberattacks:
• Use strong passwords and two-factor authentication.
• Keep your software up to date.
• Be careful about what links you click on and what attachments you open.
• Install a reputable antivirus program and keep it up to date.
• Back up your data regularly.
• Be aware of the latest cyberattacks and how to protect yourself from them.
By following these tips, you can help to protect yourself from cyberattacks.
Data Security
Chatbots like ChatGPT allow you to upload data for summarization or analysis. That data is then considered public and is in turn used to further train the model. While OpenAI claims you can “turn off your chat history, allowing you to choose which conversations can be used to train [the] models,” the veracity of that claim has yet to be tested.
A big concern from the Law Org is of course the potential for the accidental uploading of sensitive, proprietary, or privileged information to the various models. Many chatbots can summarize or rewrite information that the user simply copies/pastes into the interface. Even seemingly innocuous helper programs like Grammarly open up questions about data privacy and security.
Intellectual Property, Copyright and Trademark concerns
The laws surrounding these new technologies are still in flux. The extent to which an AI chatbot can be considered a “creator” of original content for purposes of copyright law remains unclear. These tools could inadvertently or intentionally generate content that infringes on the IP rights of others.26 The ownership of the “output” from an AI chatbot session is murky at best and needs further study.27
When this same lack of clarity extends to copyright infringement, the resulting concerns can potentially carry criminal liability.28
The UK’s National Cyber Security Centre (NCSC) issued findings which confirm that “including information in a query will not result in that potentially private data being incorporated into the LLM. However, the query will be visible to the organization providing the LLM (so in the case of ChatGPT, to OpenAI).”29
Trademark and Brand dilution is also a consideration when assessing the risk of various AI/ML/LLM models. Perhaps one of the lower risk issues, it bears examination.
How is AI/ML being used in the Legal Industry?
Law.com has compiled a digest of coverage of generative AI from Legaltech News and across ALM. James Rodgers in the Law org is currently working on compiling relevant cases as they become available. Much of what is active in the courts at this time focuses on copyright infringement, however cases around data privacy and various issues with terms of service are expected.
Using Google’s Bard, I asked it to summarize one of the more relevant articles:
PROMPT:
Summarize the article found here: https://www.law.com/corpcounsel/2023/03/31/this-is-a-rocket-ship-how-generative-ai-will-transform-legal-departments/
ANSWER:
The article "This Is a Rocket Ship: How Generative AI Will Transform Legal Departments" by David Marcus, published on March 31, 2023, discusses the potential impact of generative AI on the legal industry. Marcus argues that generative AI has the potential to automate a significant portion of the work currently done by lawyers, and that this could lead to significant changes in the way that legal departments are structured and operate.
Marcus begins by providing an overview of generative AI. He explains that generative AI is a type of artificial intelligence that can be used to create new text, images, and other forms of creative content. He then discusses the potential impact of generative AI on the legal industry.
Marcus argues that generative AI has the potential to automate a significant portion of the work currently done by lawyers. For example, generative AI could be used to automate document review, legal research, and contract drafting. This could lead to significant changes in the way that legal departments are structured and operate.
Marcus argues that legal departments that embrace generative AI will be better positioned to succeed in the future. He concludes by calling on legal departments to start thinking about how they can use generative AI to improve their operations.
Other Risks and Considerations
AI and Climate Risk StartUps
A large number of companies are being built using these models to create businesses capitalizing on the need for data-driven, climate-based action. These startups aim to help organizations better understand and predict climate risks, provide climate risk insurance, digitize and automate the carbon offsets market, and more.30
Talent and Culture Risks
Millennials and Gen Z are increasingly leaving large corporations for AI startups, or want to work in companies that give them ample opportunities to learn and use AI and ML technology, even outside the IT department.31
The energy sector (especially traditional E&P) is already struggling to attract young talent. According to a poll conducted by Ernst & Young in 2017, 62% of Generation Z and 44% of millennial respondents found the prospect of a career in the oil and gas industry to be unappealing.32 They are increasingly interested in working for companies that align with their values, concern for climate issues being at the top of that list.33
As demand skyrockets for AI expertise, newcomers to the workplace will be actively looking for companies that will give them ample opportunity to train with these technologies.34,35 Energy companies who wait too long to find ways to embrace the AI/ML tools and technologies available may find future hiring pools even more limited than before.
COMPANY is already using ML
Several departments and platforms here at COMPANY are already using a variety of ML technology. (not an exhaustive list)
- AWS Sagemaker
- Python Libraries
- Excel
- Spotfire
- Enverus
- JetBrain IDEs
- PyCharm
- IntelliJ
- WebStorm
- Thought Trace
Further review of the AI/ML users at COMPANY as well as a comprehensive list of applications, programs, and tools which have been or may be used in the future is recommended.
Recommendations
Competitive advantage is an alluring attribute of innovation; however, the pace of AI/ML adoption is critical to the success of business objectives and core capabilities, and should not be overshadowed by innovation's allure. Before adopting AI/ML into a production environment, it is recommended that all applications and programs using this technology be subjected to the current COMPANY standard risk assessment and checkpoint process, as well as any additional reviews deemed appropriate by and for any stakeholders. Additionally, COMPANY should develop an acceptable use policy that helps dictate how and when an employee could use specific AI technologies.
The risk of casual treatment of these tools, especially the chatbot-style tools, is serious enough to warrant a careful approach to the technology. While full security blocks might be in order, they are unlikely to keep all users from accessing the internet-based tools.
The COMPANY CyberSecurity Team has identified IT’s top three threats as:
- The potential surge in AI-enabled phishing, ransomware, malware, exploits, and cloud infrastructure attacks, which could outpace current cybersecurity defensive capabilities and therefore require stronger and faster defenses. Organizations realizing the demand and need for stronger and faster AI defenses are emerging as a result, and many security companies are developing defensive AIs to assist with the speed and scale of malicious AI-enabled cyberattacks.
- Data Privacy (Data Theft/Leakage).
- AI enabled cyber-attacks which are disruptive to Critical Infrastructure (ICS/DCS/SCADA). This is an area already lagging and with the speed and scale of malicious AI, this presents an even bigger cybersecurity challenge.
Avoiding any siloing of information about or approaches to this new AI/ML landscape is a critical aspect of managing the technology at the enterprise level. Ongoing collaborative efforts and information exchange between CyberSecurity, Law, Comms, OCM, HR, and IT should continue to be a primary focus.
Company-wide communications focused on awareness, education, and caution are recommended immediately. Future communications and educational offerings should likely include advanced skill acquisition around the use and deployment of these emerging tools.
It is key to keep in mind that AI and ML are no longer relegated strictly to the realm of IT professionals. Moving forward, this technology will be disruptive across all areas of the organization, and should be treated not as a fully unknown enemy and source of contention, but rather as an opportunity that must be thoroughly examined and carefully and cautiously employed.
- This report was a collaborative effort between the Law Org, The Cybersecurity Team, and Integrated Performance. Special thanks to [names omitted]. ↩︎
- Methodology notes: As part of this exercise, the author used several available AI tools to help collect and write specific portions of this report, and has annotated those sections accordingly. ↩︎
- AWS – https://aws.amazon.com/machine-learning/what-is-ai/ ↩︎
- AIMultiple.com – https://research.aimultiple.com/large-language-models/ ↩︎
- VentureBeat ↩︎
- TechRadar.com ↩︎
- Zapier ↩︎
- Science Direct ↩︎
- Research Gate ↩︎
- Emerj ↩︎
- McKinsey ↩︎
- Hart Energy ↩︎
- Science Direct ↩︎
- The HR Director ↩︎
- BusinessInsider ↩︎
- Science Direct ↩︎
- Neptune ↩︎
- Science Direct ↩︎
- IoT For All ↩︎
- Full MIT Report ↩︎
- Bloomberg ↩︎
- Bloomberg ↩︎
- CyberSecurity Hub ↩︎
- Flashpoint ↩︎
- IBM ↩︎
- FRBLaw ↩︎
- GitHub CoPilot Litigation ↩︎
- Forbes ↩︎
- NCSC ↩︎
- Forbes ↩︎
- Forbes ↩︎
- DrillingContractor ↩︎
- EY ↩︎
- AI News ↩︎
- Fiverr ↩︎
© 2023 Cortney Arenstein. All rights reserved.
This document and its content are protected by copyright law. Unauthorized reproduction, distribution, or use of any part of this document is strictly prohibited without prior written permission from the author. For permissions, contact the author.
The information provided in this document is for general informational purposes only and does not constitute legal, financial, or professional advice. While every effort has been made to ensure the accuracy of the information, the author assumes no responsibility for errors or omissions, or for any actions taken based on the information provided.
All links are to publicly available articles, blogs, or documents; links that require a subscription to view will not provide access to non-subscribers.