How Regulating AI Could Empower Bad Actors

A bipartisan group of legislators in the House of Representatives has introduced a bill to establish a national commission on Artificial Intelligence regulation. The move comes after weeks of concerns raised by the boisterous “AI doomer” community, a group that believes AI poses significant risks and could potentially even bring about the end of humanity. While it’s easy to paint Silicon Valley tech behemoths like Microsoft (MSFT) and Google (GOOG) as harbingers of the apocalypse, the reality is that these companies and others like them may be the best chance we have to create a flourishing and ethical AI industry.

In other words, better the devil you know than the devil you don’t. These corporations, being at the forefront of AI technology, have the requisite resources, expertise, and reputational interests at stake to guide the development of AI in a direction that is both beneficial and safe for humanity. They are already leading the way in establishing AI ethics and governance principles, so hindering their development most likely means leaving the future of this powerful technology in the hands of unknown—and potentially very dangerous—entities.

Put another way, we shouldn’t jump on the AI regulation bandwagon just yet. One reason often brought up is that excessive regulation in the U.S. would result in other countries, most notably China, taking the lead in AI development. If we think it is a problem that advanced superintelligent AI will fall into Silicon Valley’s hands, imagine what will happen when an authoritarian regime known for its invasive surveillance practices gets hold of it. However, another group we should be equally, if not more, concerned about is malicious actors within our own borders.

By now many of us are familiar with OpenAI’s ChatGPT, Google’s Bard, and Meta’s LLaMA. But there could also be unknown AI initiatives operating in secrecy today. In fact, if someone is a pioneer in AI development, there are a number of alluring reasons to work clandestinely.

If an innovator achieves a breakthrough and announces it publicly, rivals will quickly learn about it and try to emulate it, possibly leapfrogging the initial success. Similarly, a first-mover advantage can be lost if the only “reward” the innovator reaps is waiting a year in line for regulatory approval while competitors catch up. Advancements in AI also potentially promise market dominance: not just excess profits for a time, but also the power to imprint one’s own values onto the technology, which could leave a lasting legacy for the creator long into the future. For this last reason, secrecy is especially enticing if creators hold views society finds controversial or even repugnant.

Who should we fear will get their hands on advanced AI? It might not be the tech giants like Google, Microsoft, or OpenAI who are already adopting responsible AI frameworks and safety standards, but instead the unknown parties operating on the fringes. In such a competitive atmosphere, bad actors might be inclined to bend or even break laws to maintain a leading edge. Consequently, regulations could potentially handicap ethical enterprises, leaving the field open to unscrupulous actors.

Consider too that some open-source AI models are already in the public domain, and regulating or banning them might prove impossible now that this Pandora’s box has already been opened. The substantial computing power needed for state-of-the-art AI today will not be a barrier forever either. Technology is evolving, costs are dropping, and government bureaucracies are always slow in adapting to meet the pace of change.

Like it or not, big tech may be the best friend we’ve got. It is vital, therefore, that we encourage open competition among ethical firms rather than stifling them with overregulation. New government bodies or commissions will only slow down these innovative companies, leaving rivals willing to go rogue to capture the spoils of the AI race.

We want the leaders in AI to be the companies with a strong ethical framework and a commitment to transparency, not the ones willing to bend the rules and operate in the shadows. By rushing to regulate AI, we could inadvertently end up giving the advantage to those we least want to have it.
