WHAT AI POLICY SHOULD BE RECEIVING ATTENTION, AND WHY?
The world has made tremendous progress on AI, and it is now an essential and integral part of human life. Societies at every level are engaged in the competitive development of strategies, policies and plans for the development, use and governance of artificial intelligence. Technical progress in automation powered by machine learning and deep learning, itself the result of great improvements in computational power and in the data available to AI systems, has expanded the use of AI in the real world. This breadth of application means that AI carries vast positive and negative consequences for society at large, and it has produced spirited debate about the direction policy should take in managing it. Figures such as Elon Musk and Stephen Hawking have warned that a few companies will come to dominate society, while Vladimir Putin holds that "whoever becomes leader in this sphere will become the ruler of the world" (Agrawal, Gans, & Goldfarb, 2019). While AI researchers and industry players are responsible for the technicalities involved, it is government and civil society that will determine the policy framework.
A search of the literature confirms that the development of AI policies is geographically lopsided, with relative underrepresentation of regions such as Africa, South and Central America, and Central Asia (Jobin, Ienca, & Vayena, 2019). The interests of the countries leading AI development are well captured in their policies. The US government's AI policy rests essentially on two premises attributed to Eric Schmidt and Bob Work: the "acceleration of AI innovation to benefit the United States and to defend against the malign uses of AI" (National Security Commission on Artificial Intelligence, 2021). Similarly, the Japanese government's strategy names four priority areas of application, namely health, mobility, productivity and information security, while China aims to become the world leader in the production of AI and to have its society adapt to its use (Engelke, 2020).
In practice, the implementation of these policies has raised issues around the ethical use of the power of AI, with countries engaging in an arms race, and technology companies and security establishments invading users' privacy without limit. This has brought to light the fear felt by users and by nation states that lack such technological capacity. In view of this, industry players and governments need to develop an ethical approach to the use of AI, and such policy should be centred on innovation and trustworthiness and should respect human rights and democratic values, as recommended by the OECD (OECD, 2022).
Uncontrolled use of AI is very dangerous for the world, because the technology can readily be put to ulterior motives. From security personnel who can access the technology and use it to invade users' privacy, to nations with the capacity to snoop on the security and socio-economic affairs of another state, privacy is no longer guaranteed. While some of these actions may have justification, standards must be set for such use by requiring a lucid explanation for it. This would help resolve the lack of trust between security agencies and users, and countries would be able to avoid an unnecessary arms race. Lack of trust is, for instance, what lies behind the Chinese government's allegations against Starlink over the thousands of satellites the company has in orbit. If trust is secured, the funds being expended on arms could instead support research in areas that directly improve people's livelihoods. Failing that, Moore's Law and declining production costs will prompt other nations to acquire these technologies, and the world will lose out on the positive benefits of AI. Developers of algorithms must be able to provide a concise and clear explanation of whatever model they put into use. Doing so conforms with the letter of their constitutions, as many of these countries have a right to explanation stated in their laws and in international treaties.
Furthermore, policy must address inequality in access to and application of this technology. The weaponization of technological advances by some nations, through deliberate policies or sanctions, fuels rivalries that have resulted in unhealthy development and use of AI; nations now develop technology either to gain a competitive edge or to stay at par with other nations. Profiling and the unequal treatment of people must also be prevented. Wrong profiling arises when flawed training datasets are fed to a system, creating a wrong learning pattern for the AI. For instance, Asian Americans pay more for test preparation because of price variation, and African Americans are known to be shown fewer opportunities on Facebook because of platform discrimination (Calo, 2017). These are issues that policies must consider. Although some of these problems may not be intentional, the datasets supplied for training must be fair and well represented. Doing so will promote social cohesion and ensure that everyone gets the benefits of this technology, thereby preventing radical individualism (Jobin, Ienca, & Vayena, 2019).
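To make the point about training data concrete, the sketch below is a minimal, hypothetical illustration (written in Python with synthetic data and scikit-learn; none of the names or numbers come from the sources cited above). When positive examples from one group are systematically under-collected, a model learns to associate group membership itself with the outcome, even though the true signal has nothing to do with the group.

```python
# Hypothetical sketch: a skewed training sample teaches a model a spurious
# link between a sensitive attribute and the outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: 'group' is a sensitive attribute, 'score' is the real signal.
n = 5000
group = rng.integers(0, 2, n)        # group membership: 0 or 1
score = rng.normal(0.0, 1.0, n)      # the attribute that actually matters
label = (score > 0).astype(int)      # ground truth depends only on 'score'

# Biased collection: drop 80% of the positive examples from group 1.
keep = ~((group == 1) & (label == 1) & (rng.random(n) < 0.8))
X = np.column_stack([group[keep], score[keep]])
y = label[keep]

model = LogisticRegression().fit(X, y)

# On a fair sample the 'group' coefficient would sit near zero; with the
# skewed sample it turns strongly negative, i.e. the model has learned the
# sampling bias rather than a real pattern about the group.
print("coefficients (group, score):", model.coef_[0])
```

In policy terms, the remedy argued for above amounts to auditing datasets (and learned parameters such as the coefficient printed here) before deployment rather than after harm has occurred.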
Because it is difficult to predict all possible system behaviours and to determine their impacts ahead of time, safety must be considered as part of any policy framework. Provision must be made against both accidental and deliberate misuse of AI, and best safety practices must be built into any such strategy, even though no system is perfect. It is better to set this standard than to leave the matter open-ended.
While trust is an essential factor that stakeholders must ensure, there is also a need to guard against excessive trust in AI. Although creating a role for humans in the machine's process can reduce the speed and efficiency of the system, the point remains that AI should be developed in a way that lets people make their own decisions about its use and gives them the chance to reconsider those decisions. This will bring out the best of both machine and human.
It is also important that an intergovernmental agency, like the one that exists for atomic energy, be put in place to drive the development and implementation of policy based on the tenets above. Its presence would help monitor national players' compliance with the agreed standards, and nations could then maintain their own agencies to monitor compliance within their territories. In this way, the problems that pervade the development and use of AI, and that are responsible for the present divisions characterizing it, can be resolved.
References
Agrawal, A., Gans, J., & Goldfarb, A. (2019). Economic Policy for Artificial Intelligence. Innovation Policy and the Economy, 19, 139-159.
Calo, R. (2017, August 8). Artificial Intelligence Policy: A Primer and Roadmap. Retrieved from SSRN: https://ssrn.com/abstract=3015350
Engelke, P. (2020, May 19). AI, Society, and Governance: An Introduction. Washington, DC: Atlantic Council. Retrieved from JSTOR.
Jobin, A., Ienca, M., & Vayena, E. (2019, June 24). Artificial Intelligence: The global landscape of ethics guidelines. Retrieved from arXiv: https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.pdf
National Security Commission on Artificial Intelligence. (2021). Final Report. Arlington, VA: National Security Commission on Artificial Intelligence.
OECD. (2022, May 19). OECD Going Digital Toolkit: An overview of national AI strategies and policies. Retrieved from Organisation for Economic Co-operation and Development: https://goingdigital.oecd.org/data/notes/No14_ToolkitNote_AIStrategies.pdf