There are several efforts underway by the U.S. government, corporations, and standards bodies to create guardrails for responsible AI. But regulation can only go so far. We are all part of a global community that feeds the demand for immediacy and the dissemination of information without context or verification.
It’s time for thoughtful self-reflection about how we consume and share information.
Recently, the White House Artificial Intelligence Council met to address the risks AI poses to national security. It was the council’s first meeting since President Biden signed the executive order on AI 90 days earlier. The order called for specific disclosures and risk assessments from developers “of the most powerful AI systems,” including those used in critical infrastructure, as well as alerts from U.S. cloud companies providing computing power for foreign AI training. The White House also announced progress on a series of actions, including more stringent privacy measures, new hiring initiatives, and the creation of AI task forces within certain government agencies.
This is an urgent and necessary step toward building trust in solutions that incorporate AI. But while initial progress has been made, we are still in the early discovery phase, and it will take time for data privacy and security regulations to evolve.
Legislation, disclosures, risk assessments, and task forces alone will not protect us from the risks of AI or enable us to harness the technology’s full potential. It is through education and genuine societal change that we will succeed in guarding against the risks inherent in misusing AI. Together, we must extract ourselves from the culture of immediacy that has flourished with the advent of digital technology and that the massive spread of these new tools is likely to exacerbate.
As we know, generative AI makes it easy to produce highly viral – but not necessarily trustworthy – content. There is a risk that it will amplify the widely recognized shortcomings of social media, notably its promotion of questionable and divisive content and its tendency to provoke instant reaction and confrontation.
Furthermore, by accustoming us to answers that arrive “ready to use,” with no need to search, authenticate, or cross-reference sources, these systems risk making us intellectually lazy and weakening our critical thinking. We must find ways to end this harmful propensity for immediacy, which has been contaminating democracy and creating a breeding ground for conspiracy theories for almost two decades.
“30 Seconds Before You Believe” is the fantastic title of a training course created by the Québec Centre for Media and Information Literacy. Taking the time to contextualize content and assess its trustworthiness, and engaging in constructive dialogue rather than reacting instantly, are the building blocks of a healthy digital life. Teaching them – in both theory and practice – must become an absolute priority in education systems around the world.
If we address these challenges, we will finally be able to harness this technology’s tremendous potential to advance science, medicine, productivity, and education.