Indian Finance Service
The news that an Indian company would be granted a license to operate in the US during the pandemic lockdown seems extraordinary, but it fits a theme the industry has struggled to escape: the perception that artificial intelligence can overcome the limitations of traditional technology.
But beyond its original purpose, why should AI be able to override internal logic and limit a human's ability to address business issues?
When artificial intelligence fails in projects that are still at the planning stage, however, the blame is pinned on AI rather than on the infrastructure that supports the underlying logic.
AI has been a key part of a number of large-scale failures in tech. Google's self-driving car program, for instance, has been blamed for deaths and injuries among those using it, even though its primary purpose as a technology is to avoid accidents. Some of the safety features that AI-trained cars rely on in real-life scenarios have themselves caused accidents. A system may assume, for example, that a pedestrian will not be out walking for more than about five minutes, and the self-driving car's redundancy needs to account for longer durations. Yet when these safety features are added to cars already on the road, as in some Tesla models, crash fatality statistics continue to rise. The auto industry as a whole has been exposed to problems because its cars need to act and react ever more automatically. And China has yet to fully embrace its positive AI potential.
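The duration assumption above can be made concrete with a small sketch. Everything here is hypothetical: the class names, the one-signal pedestrian detector, and the update loop are illustrative assumptions, not any real vehicle's software. The point is that a safety monitor should stay cautious for as long as a pedestrian is observed, rather than timing out after a fixed window:

```python
from dataclasses import dataclass


@dataclass
class PedestrianTrack:
    """Tracks how long a pedestrian has been continuously detected."""
    seconds_observed: float = 0.0


class SafetyMonitor:
    """Toy redundancy sketch: remain cautious while a pedestrian is
    present, with no upper bound on the observation window."""

    def __init__(self) -> None:
        self.track = PedestrianTrack()

    def update(self, pedestrian_detected: bool, dt: float) -> str:
        """Advance the monitor by dt seconds and return the driving mode."""
        if pedestrian_detected:
            # Keep accumulating time instead of assuming a 5-minute cap.
            self.track.seconds_observed += dt
            return "CAUTION"
        # Pedestrian gone: reset the track and resume normal driving.
        self.track.seconds_observed = 0.0
        return "NORMAL"
```

A design that instead hard-coded a maximum pedestrian duration would silently drop back to "NORMAL" after the cap, which is exactly the kind of baked-in assumption the text warns about.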
Examples of the impact of malfunctioning AI technology abound in the real world, even though these systems rarely, if ever, account for core end-user issues.
For instance, systems designed for movie production show that AI can override human-to-human logic. The technology behind these scenes takes the end user's actions as input to a computer system that prepares the sets actors use to play in a movie.
This scenario brings to the foreground the question of whether users should have less control over their AI-driven products when the technology is used in ways they cannot anticipate. Similar scenarios arise in other domains. AI may be able to prevent the collapse of buildings in test settings caused by failed water pipes, but the details of how an AI would handle such damage remain vague to many. One common design for AI-driven buildings lets engineers instrument the inner structure while exposing the exterior as a structural skeleton. The danger of using such technology where occupational accidents are possible is a pertinent issue in fields such as engineering. Releasing products without clear information and without safeguards that respect human decision-making and safety implies, even more clearly, that broad consumer implementations of AI will require public debate and ongoing oversight.
AI features require consideration of human design decisions
AI technology is already being used in many different consumer settings. Such technologies already serve many clinical customers, such as pharmaceutical companies: they are used to improve clinical care in public hospitals and to improve chemotherapy for cancer and other diseases. They are also used in scientific research, from virtual-reality training for procedures such as cataract operations to virtual simulations of medicines for treating disease. India, moreover, is a perfect testbed for AI experiments. As one of the leading innovation hubs in the world, India stands ready to test AI technologies that could provide real solutions for humanity. At the same time, its recent spate of cyberattacks, such as WannaCry and Ghazvinov, should be taken as proof that firms' tech infrastructure could be overwhelmed by potentially disastrous AI-related activity.
Mumbai seems a perfect place to test AI. For instance, an organization that trains medical professionals can use AI to spot whether a patient is having a heart attack or suffering from another condition. Similarly, an institution that teaches thousands of high school students could give them the opportunity to use AI technology in their courses, through either real-world experiences or online models. Using an AI-based avatar called "Demon", for example, girls and boys can explore the social sciences, apply mathematics to their studies, and explore a variety of emotions. Another approach could have students teach one another about the social sciences using bots. That could inspire interest in STEM (science, technology, engineering, and math) subjects among students in India.
AI technology is undergoing significant and continuous change. As powerful, computationally resource-rich, and highly trained as these technologies are, they still rely on application software written in ways that millions of people cannot understand.
AI technology has incredible potential in medical advice, education, and other realms. Yet, given the recent challenges around AI, there is a clear need for a broader policy discussion.