Sundar Pichai, the CEO of Google and head of one of the world's largest artificial intelligence (AI) companies, said this week that fears about harmful uses of the technology are "very legitimate" – but argued that the tech industry should be trusted to responsibly regulate its use.

Speaking on Tuesday afternoon, Pichai said that new AI tools – the foundation of innovations such as disease-detecting algorithms and self-driving cars – require companies to set ethical guardrails and to think through how the technology could be abused.

“I think tech has to realize it just can’t build it and then fix it,” Pichai said. “I think that doesn’t work.”

Tech giants have to ensure that artificial intelligence with “agency of its own” does not harm humankind, Pichai said, adding that he remains optimistic about the technology’s long-term benefits. But his acknowledgment of AI’s potential dangers aligns him with some tech critics, who warn that the technology could be used to enable lethal weaponry, invasive surveillance, and the spread of misinformation. Other tech executives, such as SpaceX and Tesla founder Elon Musk, have forecast more dire outcomes, predicting that AI could prove “far more dangerous than nukes.”

Pichai said that lawmakers around the world are still working to grasp AI’s effects and the potential need for government regulation.

“Sometimes I worry people underestimate the scale of change that’s possible in the mid- to long-term, and I think the questions are actually pretty complex,” he said.

Other tech giants, including Microsoft, have recently embraced the regulation of AI – both by the companies that develop the technology and by the governments that oversee its use.

Pichai also said that artificial intelligence, if handled properly, could deliver “tremendous benefits”, including helping doctors detect eye disease and other ailments through automated scans of health data.

“Regulating a technology in its early days is hard, but I do think companies should self-regulate,” he said. “This is why we’ve tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation.”

In January, Pichai called artificial intelligence (AI) “one of the most important things that humanity is working on” and said it could prove “more profound” for the world than “electricity or fire.” But the race to build machines that can operate on their own has revived familiar concerns that Silicon Valley’s corporate ethos – “move fast and break things,” as Facebook once put it – could produce powerful, imperfect technology that eliminates jobs and harms people.

Pichai compared the early efforts to set boundaries for artificial intelligence to the academic community’s response in the early days of genetics research.

“Many biologists started drawing lines on where the technology should go,” he said of the academic community. “There’s been a lot of self-regulation by the academic community, which I think has been extraordinarily important.”