Recent days have been brimming with technological advancement, with significant buzz in the realm of artificial intelligence, particularly around its governance. Notably, the President of the United States issued an Executive Order on AI, and the Bletchley Declaration emerged from the summit at Bletchley Park: an articulate vision for how AI should be used, endorsed by 27 nations.

Both of these initiatives are commendable in their foresight, but they share a common limitation. Their aspirational nature lacks the teeth of enforcement, particularly in the private sector, where the risk of misuse is most acute. This isn’t surprising. After all, the intricacies of American law dictate that enforceable regulations require legislative intervention. In the U.S., an Executive Order’s jurisdiction is limited to the Executive Branch of the government, while the power to regulate the citizenry resides within the Legislative Branch, namely the U.S. Congress.

Yet this is only one portion of a complex narrative. The pace at which artificial intelligence is advancing is staggering. It is evolving at such breakneck speed that the Executive Order, issued barely a week ago, already shows notable gaps and covers little beyond the evident and immediate concerns. Take, for instance, the section addressing the misuse of AI. It states:

I hereby direct the Secretary of Commerce, within 90 days of the date of this order, to: (iii) Determine the set of technical conditions for a large AI model to have potential capabilities that could be used in malicious cyber-enabled activity, and revise that determination as necessary and appropriate. Until the Secretary makes such a determination, a model shall be considered to have potential capabilities that could be used in malicious cyber-enabled activity if it requires a quantity of computing power greater than 10^26 integer or floating-point operations and is trained on a computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum compute capacity of 10^20 integer or floating point operations per second for training AI.
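For a sense of scale, here is a rough, back-of-the-envelope sketch of how training compute stacks up against that default 10^26 figure, using the widely cited approximation of roughly 6 x parameters x training tokens for dense transformer training. The model sizes are illustrative assumptions, not figures from the order:

```python
# Back-of-the-envelope comparison against the order's default 10^26 FLOP figure.
# Uses the common ~6 * parameters * tokens approximation for dense transformer
# training compute; the model sizes below are illustrative assumptions only.

EO_DEFAULT_THRESHOLD_FLOP = 1e26

def approx_training_flops(parameters: float, tokens: float) -> float:
    """Rough total training compute for a dense transformer."""
    return 6.0 * parameters * tokens

examples = {
    "7B-parameter model, 2T tokens": approx_training_flops(7e9, 2e12),    # ~8.4e22
    "70B-parameter model, 2T tokens": approx_training_flops(70e9, 2e12),  # ~8.4e23
}

for name, flops in examples.items():
    status = "over" if flops > EO_DEFAULT_THRESHOLD_FLOP else "well under"
    print(f"{name}: ~{flops:.1e} FLOP, {status} the 10^26 threshold")
```

On this rough arithmetic, even fairly large openly available models sit orders of magnitude below the default line, which is exactly the territory the next paragraph describes.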

Our understanding of artificial intelligence and its deployment rests on a fluid landscape that is continually reshaping itself. How AI models are built and run has shifted dramatically from just a year ago. Today's malicious actors are unlikely to leverage a large-scale, public model like GPT-4 for nefarious purposes. Instead, they are moving towards a constellation of smaller models that can run on everyday hardware (think of an above-average laptop) or across distributed networks, orchestrated by agent frameworks such as LangChain, AutoGen, or AutoGPT. This approach has not merely emerged; it has been in play for several months now.
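To make that pattern concrete, here is a minimal sketch of the orchestration idea in plain Python. It deliberately avoids any specific framework API; the run_local_model helper is a hypothetical stand-in for whatever small, locally hosted model an operator might run:

```python
# Minimal sketch of the "constellation of smaller models" pattern described
# above. run_local_model() is a hypothetical stand-in for any small model
# hosted on local hardware; no specific agent framework API is assumed.

from dataclasses import dataclass

def run_local_model(prompt: str) -> str:
    # Placeholder for a call into a locally hosted model (an on-device
    # inference runtime, for example). Returns a dummy string here.
    return f"[local model output for: {prompt[:40]}...]"

@dataclass
class Agent:
    name: str
    instructions: str

    def run(self, task: str) -> str:
        # Each agent is just a small model wrapped with its own instructions.
        return run_local_model(f"{self.instructions}\n\nTask: {task}")

# One agent plans, several workers handle sub-tasks, one agent merges results.
# No single large, centrally hosted model is involved at any point.
planner = Agent("planner", "Break the goal into independent sub-tasks.")
workers = [Agent(f"worker-{i}", "Complete the assigned sub-task.") for i in range(3)]
editor = Agent("editor", "Merge the drafts into one coherent result.")

plan = planner.run("Produce a multi-part report.")
drafts = [worker.run(f"{plan} (part {i})") for i, worker in enumerate(workers)]
print(editor.run("\n".join(drafts)))
```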

The pressing issue here is the attempt to govern the technology in its essence. Yet this is a proverbial genie that cannot be put back into its bottle. Even if every major AI provider in the U.S. were reined in, we’re looking at a global tapestry of thousands of AI models, including those developed by state-backed entities, such as the Falcon models from the United Arab Emirates’ Technology Innovation Institute and GLM-130B from Tsinghua University in China. The advancement of the technology has surpassed the point of no return.

This brings us to the need to regulate the outcomes of AI use. When it comes to criminal activity, the solution is more clear-cut: enforce existing laws firmly and vigorously. Using AI to commit fraud is, undeniably, still fraud. Impersonating someone with malicious intent remains impersonation, irrespective of whether it’s executed by a seasoned actor or an AI. To deter AI-enabled misconduct, one could consider introducing enhanced penalties — use AI for wrongdoing and you face the standard consequences plus additional sanctions.

Then there are the grey areas — those not distinctly criminal but ripe for abuse, such as the potential of language models to strip away anonymity through stylometric analysis. A recent paper (yet to be peer-reviewed) outlines this very issue. Here, vigilance is key, as is the independent evaluation of risks. While possessing certain datasets might not breach any laws, their misuse by bad actors can lead to legal transgressions. Take, for example, an unscrupulous insurance company that uses social media data to de-anonymize and discriminate against its policyholders. The infraction occurs not at data acquisition but at the point of misuse. Until that line is crossed, our role involves rigorous testing to assess potential harm and the feasibility of misuse.
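To illustrate how low the barrier is, here is a toy sketch of stylometric matching: it compares an anonymous snippet against writing samples from known authors using character n-gram frequencies and cosine similarity. It is only a simplified illustration of the risk, not the method from the cited paper, and the sample texts are invented:

```python
# Toy stylometric matching: profile texts with character n-gram counts and
# rank known authors by cosine similarity to an "anonymous" post. This is a
# simplified illustration of the de-anonymization risk, not a real system.

from collections import Counter
from math import sqrt

def char_ngrams(text: str, n: int = 3) -> Counter:
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented writing samples standing in for a corpus of known authors.
known_authors = {
    "author_a": "I reckon the committee will, in due course, revisit its earlier findings.",
    "author_b": "lol yeah that meeting was a total mess, nobody even read the doc",
}
anonymous_post = "tbh the whole thing was a mess, like nobody read anything lol"

profile = char_ngrams(anonymous_post)
scores = {name: cosine(profile, char_ngrams(sample)) for name, sample in known_authors.items()}
print(max(scores, key=scores.get), scores)  # stylistically closest known author
```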

These considerations are crucial, yet the greatest challenge before us isn’t something technology alone can address. Technology may facilitate the problem, but it cannot prevent it. What we’re grappling with is the erosion of reality. Many have encountered videos produced with tools like HeyGen, where a person’s likeness is harnessed to create a convincing fabrication of them voicing statements they’ve never uttered. HeyGen has refined this kind of technology, much of which exists as open-source code, by wrapping it in a user-friendly interface.

In today’s media landscape, we are constantly confronted with the task of discerning the authenticity of content. If you come across a video of a politician making a statement, it’s now entirely rational to question whether that statement was actually made by them. This challenge is compounded by our tendency, in a deeply divided society, to accept information that aligns with our beliefs, often without questioning its veracity.

Faced with the growing deluge of information we encounter daily, the remedy isn’t straightforward: it’s a commitment to critical thinking and investigative diligence. It’s resisting the urge to accept information at face value, especially when it aligns with our preconceived notions. Instead, we must engage in due diligence, seeking out and verifying sources, questioning established narratives, and cultivating a circle of trusted experts. And perhaps the most demanding task is to remain open-minded, ready to revise our understanding when confronted with incontrovertible evidence.

This represents a paradigm shift in how we process information. We must evolve from being mere passive recipients of content to becoming active interrogators of it, and it is a lesson we must instill in future generations.

Test yourself in this new reality. Use accessible generative AI tools to craft a narrative contrary to known facts, and observe how easily a falsehood can be fabricated (while ensuring that any such creations are ethically disposed of or clearly marked as fictitious). Understanding the mechanics of misinformation demystifies it, diminishing its effect and fostering a healthy skepticism. Could that really be Taylor Swift or Volodymyr Zelenskyy, or is it the work of digital impersonation? What signs might reveal the authenticity or falsity of such content?

The fight against AI-propagated misinformation isn’t one that technology alone can win — today’s AI detection systems are unreliable, akin to flipping a coin. The real battle is with ourselves, our habits, and our willingness to question. It requires a human-centric solution.

For organizations, influencers, and individuals, one of the most vital actions is to create channels of authenticity. Make it straightforward for others to confirm whether a statement truly came from you. Transparency is the bedrock of trust: by being open about your use of AI, you earn credibility, so that when falsehoods emerge, your history of transparency upholds your integrity. Nurture a supportive community that can act as a first line of defense against misinformation, and prioritize building a reputation for trustworthiness, one that helps others differentiate truth from falsehood.
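One practical way to create such a channel of authenticity is to publish a public key and sign your official statements with the matching private key, so anyone can check whether a quote attributed to you is genuine. Here is a minimal sketch using Ed25519 via the Python cryptography package; key storage, rotation, and distribution are deliberately left out:

```python
# Sign a public statement and verify it against a published public key.
# A minimal sketch of a "channel of authenticity"; key management is omitted.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # kept secret by the author
public_key = private_key.public_key()        # published openly (e.g. on your site)

statement = b"Our organization did not publish the video circulating today."
signature = private_key.sign(statement)

# Anyone holding the public key can check a statement attributed to the author.
try:
    public_key.verify(signature, statement)
    print("Signature valid: the statement is authentic.")
except InvalidSignature:
    print("Signature invalid: treat the statement as unverified.")
```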

The boundary between what’s real and what’s not may be increasingly obscured by technology, yet the fundamental principles of trust endure. There’s much we can do to arm ourselves and our communities against misinformation. Each of us plays a role in addressing the societal challenges amplified by AI’s power to blend fact with fiction.
