Cracking the code: solving for 3 key challenges in generative AI

By Chet Kapoor, Chairman and CEO, DataStax

Generative AI is on everyone’s mind. It will revolutionize the way we work, share knowledge and function as a society. Simply put, it will be the biggest innovation we will see in our lifetime.

One of the biggest areas of opportunity is productivity. Think about where we are right now – we are dealing with labor shortages, debt, inflation and more. If we do not improve society’s productivity, there will continue to be economic consequences.

With AI, we will see compounding productivity gains across society. In fact, McKinsey called generative AI "the next productivity frontier." But while technology is certainly a catalyst for productivity, it does not drive transformation by itself. It starts with us – leaders and enterprises. When leaders bring AI into the organization and deploy it well, productivity rises across teams around the world, which in turn drives the company forward.

As with any powerful new technology (think: the Internet, the printing press, nuclear power), there are major risks to consider. Many leaders have expressed a need for caution, and some have even called for a pause in AI development.

Below, I’ll share some key AI challenges, how leaders are thinking about them, and what we can do to address them.

Overcoming bias

AI systems draw data from limited sources. The vast majority of the data these systems rely on is generated by a subset of the population, largely in North America and Europe, so AI systems (including GPT) reflect that worldview. Meanwhile, roughly 3 billion people still lack reliable access to the Internet and have contributed little data of their own. And bias doesn't just come from data; it also comes from the humans who build these technologies.

Implementing artificial intelligence will bring these biases to the fore and make them transparent. The question is: how can we address, manage, or mitigate inherent bias as we build and use AI systems? A few places to start:

  • Address bias not only in your data, but also in how that data is interpreted, used, and interacted with by users
  • Lean on open source and data science tools. Open source can ease the technical barriers to combating AI bias through collaboration, trust, and transparency
  • Most importantly, build diverse AI teams that bring multiple perspectives to spotting and combating bias. As Reid Hoffman and Mala Gabbet discussed in a recent Masters of Scale strategy session, we must also "incorporate a variety of mindsets toward AI, including skeptics and optimists."
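The first bullet above can be made concrete with a small sketch. Assuming a toy corpus tagged with the region each record came from, the hypothetical `representation_report` helper below flags groups whose share of the data falls below a chosen threshold – a crude proxy for under-representation, not a complete bias audit:

```python
from collections import Counter

def representation_report(samples, group_key, threshold=0.10):
    """Report each group's share of a dataset and flag groups whose
    share falls below `threshold` (a naive under-representation proxy)."""
    counts = Counter(s[group_key] for s in samples)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "share": round(share, 3),
            "under_represented": share < threshold,
        }
    return report

# Toy metadata: the region each training document came from
docs = ([{"region": "NA"}] * 70
        + [{"region": "EU"}] * 25
        + [{"region": "Africa"}] * 5)
print(representation_report(docs, "region"))
```

Real audits go much further (intersectional slices, label quality, outcome disparities), but even a check this simple makes skew visible early.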

Policies and regulations

The pace of artificial intelligence progress is lightning fast; new innovations seem to arrive every day. With important ethical and social questions around bias, safety, and privacy, smart policies and regulations around AI development are essential.

Policymakers need a more agile learning process for understanding the nuances of artificial intelligence. I have always said that, over time, markets mature faster than any single mind. The same can be said for policy, except that given the pace of change in the AI world, we will have to compress that timeline. This should be a public-private partnership, with private institutions playing a strong role.

Cisco’s VP of Security and Collaboration, Jeetu Patel, shared his perspective in our recent discussion:

“We have to make sure that there is policy, regulation, assistance from the government and the private sector to ensure that this displacement does not create human suffering beyond a certain point, so that there is no concentration of wealth that will be further exacerbated as a result.”

“Machines Take Over”

People are really afraid of machines replacing humans. And their concerns are valid, given the human-like nature of AI tools and systems like GPT. But machines will not replace humans. Humans with machines will replace humans without machines. Think of AI as a co-pilot. It is the user’s responsibility to keep the co-pilot under supervision and to know its powers and limitations.

Shankar Arumugavelu, SVP and Global CIO at Verizon, says we need to start educating our teams. He calls it the AI Literacy Campaign.

“We spent time internally at the company raising awareness of what generative AI is, and also differentiating between traditional ML and generative AI. There is a risk if we don’t clarify machine learning, deep learning, and generative artificial intelligence – and also when you would use one versus the other.”

And then the question is: what else can you do if something that used to take you two weeks now takes you two hours? Some leaders will focus on efficiency and talk about reducing headcount. Others will think: I have all these people – what more can I do with them? The smart move is to channel the benefits of AI into more knowledge, innovation, and productivity.

As Goldman Sachs CIO Marco Argenti said, the interaction between humans and artificial intelligence will completely redefine the way we learn, co-create and distribute knowledge.

“AI has the ability to explain itself based on the reader. In fact, with the prompt, the reader almost becomes the writer. The reader and the writer are, for the first time, on equal footing. Now we can extract relevant information from a corpus of knowledge in a way that actually follows your understanding.”

Working together

We’ve seen leaders calling for a pause in AI development, and their concerns are well-founded. It would be negligent and harmful not to consider the risks and limitations surrounding the technology, and we need to take governance very seriously.

However, I don’t believe the answer is to stop innovating. If we can get the brilliant people working on these technologies to come together and collaborate with government institutions, we can balance the risks and opportunities to deliver more value than we ever thought possible.

The result? A world where productivity is abundant, knowledge is accessible to all, and innovation is used for good.

Learn about vector search and how DataStax is leveraging it to unlock AI capabilities and applications for enterprises.
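To give a flavor of what vector search means in practice, here is a minimal, dependency-free sketch: documents and queries are represented as embedding vectors, and the most relevant documents are the ones whose vectors are closest to the query by cosine similarity. The `nearest` helper and the toy three-dimensional "embeddings" are invented for illustration; production systems use learned embeddings and an indexed vector database rather than a brute-force scan:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, corpus, k=2):
    """Return the names of the k corpus vectors most similar to `query`."""
    ranked = sorted(corpus.items(),
                    key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy 3-dimensional "embeddings" (real ones have hundreds of dimensions)
corpus = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 0.0, 1.0],
}
print(nearest([1.0, 0.05, 0.0], corpus, k=2))  # → ['doc_a', 'doc_b']
```

The brute-force scan here is O(n) per query; vector databases replace it with approximate nearest-neighbor indexes so the same lookup scales to billions of embeddings.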

About Chet Kapoor:

Chet is Chairman and CEO of DataStax. He is a proven leader and innovator in the technology industry with more than 20 years of leadership at innovative software and cloud companies, including Google, IBM, BEA Systems, WebMethods and NeXT. As Chairman and CEO of Apigee, he led company-wide initiatives to build Apigee into a leading digital business technology provider. Apigee, now part of Google, is a cross-cloud API management platform for multi-cloud and hybrid environments. Chet successfully took Apigee public before the company was acquired by Google in 2016. Chet earned his bachelor’s degree in engineering from Arizona State University.