Raj Koneru, Founder and CEO, Kore.ai.
Generative AI (GenAI) has opened a world of opportunities for businesses to innovate and create value. AI technologies are expected to contribute up to $15.7 trillion to the global economy by 2030. Research firm Gartner predicts that roughly 80% of enterprises will have deployed foundation models in their production environments by 2026.
Given the vast potential of advanced AI, governments are keen to regulate its development to ensure safe and responsible usage. U.S. President Joe Biden's executive order on AI and the U.K. government's Bletchley Declaration, which calls for global action to create a safe AI ecosystem, illustrate the priorities of global leaders. The European Union has gone a step further with an AI Act focused on risk and accountability, writing ethical AI practices into law.
It's not enough to just make laws—enterprises hold the key to enforcing AI safety.
As the founder of a company that specializes in applying advanced AI responsibly, I welcome government initiatives promoting AI safety. However, I believe there's still a long way to go before a truly mature regulatory regime emerges.
Governments right now are racing to keep up with the rapid evolution of AI technology. Even within the tech ecosystem, the full impact of the AI revolution is not yet fully understood, making it challenging to address all possible threats, opportunities and risks.
Currently, I think the most crucial task for governments is to create policies ensuring equal access to AI for everyone. They also need to develop legal frameworks that protect intellectual property (IP), copyrights and human liberty while penalizing harmful actions such as spreading misinformation and deepfakes or sowing bias and discrimination against specific groups based on color, race, language, ethnicity, gender or class.
Enterprises, meanwhile, should focus on implementing technologies that prioritize transparency and fairness. They must build solutions with guardrails that guide the behavior and responses of AI systems. This includes controlling AI outputs and being transparent about the business rules followed, the information collected from users, and the goals or tasks accomplished.
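To make the idea concrete, here is a minimal sketch of what an output guardrail can look like in practice. It is an illustration rather than any particular product's implementation; the rules, patterns and function names are hypothetical.

```python
import re

# Hypothetical illustration: a rule-based guardrail layer that sits between
# the model and the user and screens every response before delivery.

BLOCKED_PATTERNS = {
    "unvetted financial advice": re.compile(r"\bguaranteed returns?\b", re.IGNORECASE),
    "internal data leak": re.compile(r"\binternal use only\b", re.IGNORECASE),
}

FALLBACK = "I can't help with that request. Let me connect you with a human agent."

def apply_guardrails(model_response: str) -> tuple[str, list[str]]:
    """Return a safe response plus the rules that fired, so the system
    can be transparent about why an output was altered."""
    violations = [name for name, pattern in BLOCKED_PATTERNS.items()
                  if pattern.search(model_response)]
    if violations:
        return FALLBACK, violations  # block the output and disclose why
    return model_response, []        # pass the response through unchanged

response, fired = apply_guardrails("We promise guaranteed returns of 20%.")
print(response)  # the fallback message
print(fired)     # ['unvetted financial advice']
```

Returning the list of triggered rules alongside the response is one way to satisfy the transparency requirement: the system can tell users, and auditors, exactly which business rule changed an answer.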
Open-source models can pave the way for a benevolent AI.
I view the idea of a safe and benevolent AI as inextricably linked to greater democratization of technology. Given its transformative potential, AI should be easily accessible to enterprises as well as consumers of all ages.
Today, we are starting to see AI-powered tools being used in every aspect of day-to-day life, to the point where people may be interacting with these models thousands of times each day.
Who owns the internet? Nobody, right? Governments must make sure that AI isn't monopolized or controlled by certain groups of people or companies. If it is, that's when I believe bias can creep in. Monopoly control could also lead to predatory behavior driven by monetization concerns rather than a commitment to the common good.
Allowing and protecting open-source AI models, by contrast, will help ensure that this technology remains freely available to humanity and benefits everybody rather than the few who can capitalize on their existing offerings to become trillion-dollar companies.
Responsible AI is the need of the hour.
Self-regulation and responsible use of AI are among the surest ways to success and growth in the AI era. I believe frameworks that promote responsible AI will allow enterprises to harness the power of AI while ensuring fairness, transparency, integrity, inclusivity and accountability.
Over the past 12 months, there has been a strong emphasis on guardrails and transparency when evaluating AI platforms and solutions. C-suite leaders are looking for capabilities that not only meet regulatory requirements but also adhere to ethical standards, enhance user experiences and foster trust.
My company's experience shows that built-in guardrails and validations can help fortify AI implementations against misuse: secure and ethical handling of user data, user consent, compliance with industry and regulatory standards, robust testing mechanisms and feedback collection.
What do these things mean, specifically?
AI systems should be transparent and explainable. People should know upfront if they are interacting with an AI assistant or a human so they can decide how much they want to rely on it.
As AI tools are employed for various tasks—recommending product suggestions, prioritizing loan applications, diagnosing medical conditions and screening resumes—it is crucial to prevent AI bias and harmful or toxic behavior. Issues like poor AI model training, lack of data diversity, inherent data biases and insufficient supervision and oversight must be carefully monitored and eliminated to avoid active AI discrimination.
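One simple, widely used check here is demographic parity: comparing decision rates across groups in an audit log. The sketch below is illustrative only; the data, group labels and disparity threshold are all hypothetical.

```python
from collections import defaultdict

# Illustrative fairness check (demographic parity): compare approval
# rates across groups in a decision audit log.

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, was_approved) pairs from an audit log."""
    totals: dict[str, int] = defaultdict(int)
    approved: dict[str, int] = defaultdict(int)
    for group, was_approved in decisions:
        totals[group] += 1
        approved[group] += was_approved  # bool counts as 0 or 1
    return {group: approved[group] / totals[group] for group in totals}

audit_log = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

rates = approval_rates(audit_log)
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # disparity threshold chosen purely for illustration
    print(f"Disparity alert: rates {rates} differ by {gap:.0%}")
```

A check like this catches only one narrow form of bias; in practice it would be one signal among several feeding the supervision and oversight described above.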
To reduce hallucinations and deepfakes and ensure the factual integrity of information, avoid over-reliance on pure AI models trained without human supervision. Instead, I recommend improving models through the "human-in-the-loop" process.
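In practice, human-in-the-loop often means routing low-confidence outputs to a reviewer before they ever reach users. Here is a minimal sketch under that assumption; the confidence score, threshold and review queue are hypothetical.

```python
from dataclasses import dataclass

# Minimal human-in-the-loop sketch: answers below a confidence threshold
# are queued for human review instead of being sent to the user.

@dataclass
class Draft:
    question: str
    answer: str
    confidence: float  # e.g., from the model or a separate verifier

review_queue: list[Draft] = []

def deliver_or_escalate(draft: Draft, threshold: float = 0.8) -> str | None:
    if draft.confidence >= threshold:
        return draft.answer        # confident enough to send directly
    review_queue.append(draft)     # a human verifies or corrects it first
    return None                    # nothing is sent automatically

result = deliver_or_escalate(
    Draft("What is our refund policy?", "Refunds are issued within 30 days.", 0.55))
print(result)             # None: the draft was escalated
print(len(review_queue))  # 1 item now awaits human review
```

The corrected answers a reviewer produces can then be fed back as training data, which is how the loop improves the model over time rather than merely filtering it.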
Governance controls, tools and processes can enable enterprises to continuously monitor AI systems and rapidly deploy incremental improvements as needed. Continuous monitoring supports early detection of emerging issues, and the ability to quickly implement targeted changes means they can be corrected before they spread.
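As one illustration, a monitoring loop can track a rolling window of user feedback and flag the system for a targeted fix when quality degrades. The window size, metric and threshold below are assumptions made for the sketch.

```python
from collections import deque

# Sketch of a continuous-monitoring check: keep a rolling window of user
# feedback on AI responses and flag the system for a targeted fix when
# the negative-feedback rate crosses a threshold. Values are illustrative.

WINDOW = deque(maxlen=500)  # most recent feedback signals
ALERT_RATE = 0.10           # hypothetical tolerance for negative feedback
MIN_SAMPLES = 100           # wait for enough data before alerting

def record_feedback(was_negative: bool) -> bool:
    """Record one feedback event; return True when an alert should fire."""
    WINDOW.append(was_negative)
    if len(WINDOW) < MIN_SAMPLES:
        return False
    return sum(WINDOW) / len(WINDOW) > ALERT_RATE

# Example: a sustained 20% negative-feedback rate trips the alert.
for i in range(120):
    if record_feedback(was_negative=(i % 5 == 0)):
        print(f"Alert after {i + 1} events: schedule a targeted improvement")
        break
```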
AI safety is key to the successful use of technology.
Gartner predicts that by 2026, organizations that implement transparency, trust and security in their AI models will see a 50% improvement in adoption, business goals and user acceptance.
The ability to communicate the fairness and transparency of their AI offerings will likely become a key competitive advantage and potential differentiator for businesses, because that's what their customers and partners expect of them.
Companies offering advanced AI capabilities and enterprises implementing large-scale AI deployments will stay on top of the learning curve to ensure AI safety because they have skin in the game.
While policymakers can create numerous advisories and rules, the responsibility to enforce them ultimately lies with those who implement them.
My experience with conversational and generative AI implementations around the world shows that every penny invested in AI can yield invaluable results when the technology is handled responsibly.