Were ideas on AI regulation and privacy adequately addressed at the Summit?
I think there’s some feeling that a lot of the big announcements were business related and not really about AI governance. There was certainly a lot of discussion of those issues, but at the end of the day nobody is agreeing on international regulatory frameworks. The reality, though, is that there is actually going to be substantial AI regulation in India as a result of the data protection law.
When that law is in effect, some very complicated analysis will need to be done. Are you allowed to train models on data without getting the consent of users? There may be ways to do so, but it is really going to depend on how the law is interpreted.
Even in the United States (US), the states, including the home state of all the big tech companies, are moving forward very quickly with AI regulation. If it is reasonable regulation, companies can all live with it, but they now have obligations to provide transparency reports to the government.
How do we get consensus between regulators and enterprises?
A lot of the proponents of AI and the godfathers of AI are, of course, calling for stricter and broader regulation. It is not up to us to decide those boundaries, but we need a degree of regulation. One thing that I think will move more quickly, and where India has given it a good push, is bottom-up standards, which are a more technical process and really focus on where there is a minimum viable agreement so that enterprises can work together.
The standard becomes the law. There is a big desire across the ecosystem not to spend a lot of time fighting about contracts but to have standards that let everybody work together. We also need standards to cover privacy issues: “I am only allowed to use this data according to this law or this set of promises I made to a consumer.”
What’s your take on the DPDP Act and AI governance guidelines in India?
I think it is reasonable to go slow on AI regulation, given that jurisdictions around the world are recalibrating whether they moved too fast or not. But it remains to be seen how the board gets set up, where it affects AI, and where interpretations are going to be critical. How the regulator looks at issues like de-identification, consent, and what counts as public data will frame how India has regulated AI. The Act is there; the words are there, but they are new and we don’t have a history of how they will be interpreted.
Is a global AI regulation possible?
That message is clear: it’s not going to happen, because the US says it does not want it, and while the Europeans say they want it, they are now worried that they overregulated AI and are trying to reopen the EU AI Act and their data protection regulations. South Korea, whose privacy regulation currently has no flexibility for AI training, has also proposed an amendment. I think the jurisdictions that went quickly and went hard are feeling that they overshot and are trying to roll it back.
The US government has taken a stance of not regulating AI. What is the scene on the ground?
This “don’t regulate” stance is rapidly changing in the US. What we’re seeing at the federal level is one message, but we are already seeing residents politically pushing back against data centres. We are seeing people worried about electricity costs. The biggest political issue in the US right now is affordability and the price of everything. The idea that AI may be playing a negative role in pricing is a cause of concern. The midterms [midterm elections] will be the first time we see AI issues on the agenda of ordinary voters, and that will push a lot of local regulation, so it’s going to be a little more complicated.