Over three-quarters of senior data and technology executives believe that successfully scaling artificial intelligence into their operations is a top priority, according to a new report by MIT Technology Review Insights.
On Tuesday, MIT Technology Review Insights, the research arm of one of the world's largest technology magazines, announced the report, produced in partnership with the software company Databricks and based on a global survey of 600 senior data and technology executives conducted in May and June 2022.
Overall, 78 per cent of surveyed executives believed that scaling AI successfully was a top priority.
Some of the organizations surveyed included DuPont Water & Protection (NYSE: DD), Massachusetts Institute of Technology (MIT), MosaicML, Shell PLC (LON:SHEL), Cosmo Energy Holdings (TYO: 5021), the US Department of Veterans Affairs, Adobe (NASDAQ: ADBE) and the University of California, Berkeley.
“By establishing unified and consistent governance frameworks, organizations can navigate the potential risks and maximize the benefits of generative AI adoption,” says Laurel Ruma, global director of custom content for MIT Technology Review.
She added that Chief Information Officers play a pivotal role in ensuring ethical and responsible practices and finding a balance between innovation and compliance.
“With the democratization of AI and the integration of generative models into organizational workflows, we are witnessing the transformative power of AI on a truly enterprise-wide scale.”
Report identifies four key findings
The report presented four main findings. The first is that generative AI and large language models (LLMs) are democratizing access to artificial intelligence, sparking the beginnings of truly enterprise-wide AI.
“We can now translate language into something that a machine can understand. I can’t think of anything that’s been more powerful since the desktop computer,” said an MIT associate professor and advisor for MosaicML.
The report also found that companies are increasingly interested in using open-source technology to build their own LLMs, since doing so allows them to make use of and safeguard their own data and intellectual property.
Anxiety about automation, the report found, is overblown but should not be ignored. Generative AI tools are already capable of handling a wide range of complex tasks. However, the CIOs and scholars interviewed for the report do not expect these tools to pose a major threat to jobs. Rather, they predict the tools will free the workforce from tedious tasks, allowing people to concentrate on work that contributes more to the business.
The final finding is that proper rules and guidelines are what allow AI to move forward quickly and safely. Generative AI brings both business and social risks, such as protecting trade secrets, avoiding copyright infringement, dealing with unreliable results and handling harmful content.