
By 2026, 80% of large enterprise finance teams will rely on internally managed and owned generative AI platforms trained with proprietary business data, according to Gartner, Inc.

“The recent entry of large, well-established companies into the generative AI market has kicked off a highly competitive race to see who can deliver revolutionary value first,” said Mark D. McDonald, senior director analyst in the Gartner Finance Practice. “Leadership teams do not want to fall behind peers; however, as the chief steward of an organization’s financial health, CFOs must balance the risks and rewards of tools like generative AI. There are three distinct conversations that CFOs should have across leadership circles to ensure that reasonable expectations are established and that the use of generative AI creates value without introducing unacceptable risks.”

According to Gartner, the three conversations CFOs should lead are:

Discussion No. 1: Debunk the Hype to Avoid Inflated Expectations

Generative AI offers businesses the potential to navigate the growing complexity and volume of their data with ease. However, the technology’s limitations pose real challenges to that goal, leading Gartner to place it at the Peak of Inflated Expectations. CFOs should partner with senior technology leadership (e.g., the CIO, chief data officer and chief information security officer) to distinguish hype from reality and share the results with the rest of the executive leadership team.

Current generative AI solutions combine a collection of modern innovations, including deep learning, natural language processing, reinforcement learning and graph networks, all of which deliver remarkable outcomes. However, the extensive number of parameters and connections used to produce these outputs prevents any transparent reconciliation of the algorithm’s response.

This opacity includes an inability to determine whether the algorithm has developed unstated objectives or is basing its conclusions on inaccurate, irrelevant, unethical or even illegal information. “Such limitations form the backbone of conversations that CFOs must have with leadership circles when considering the use of generative AI,” said McDonald.

Discussion No. 2: Define Generative AI Use Cases That Are Aligned, Responsible and Actionable

With an understanding of generative AI’s limitations, CFOs can responsibly direct a conversation with management teams aimed at defining use cases. They must collaborate with operational management, executive leaders and representatives from the user community to define actionable generative AI use cases that align with the organization’s overall strategy and risk tolerance.

“As with any AI solution, the best use cases exploit a specific business’s strengths and defend its weaknesses,” said McDonald. “Copying use cases from other companies will likely not have the same impact in an organization with different circumstances. Instead, aligning generative AI’s fundamental capabilities to a business’s unique strategies and objectives delivers value that differentiates a company from its competitors.”

Discussion No. 3: Develop Generative AI Governance and Guidelines for Acceptable Use

Generative AI requires human oversight to ensure that outcomes adhere to the nuance of human judgment and fairness. While generative AI’s output may appear human-like and compelling, the results may not always be accurate, unbiased or reliable.

“CFOs should engage legal, HR, audit, security and other relevant corporate support functions to establish usage guidelines to minimize security, compliance, regulatory and other intellectual property risk,” said McDonald. “This discussion must also include the potential impact to the workforce, company culture and necessary training.”

