Is Your Privacy Being Violated by Public Service Automation?
Many governments already use artificial intelligence and are exploring new applications. Is artificial intelligence a threat to your privacy and security?
Artificial intelligence (AI) is as controversial as it is impressive. It can simplify many aspects of work and daily life, but it also raises ethical concerns. Some people are especially worried about the United States government’s use of AI.
Many government AI initiatives are now in use or in development, and some have done a lot of good. At the same time, they raise a slew of privacy concerns. Here’s a deeper look at these initiatives and their implications for public privacy.
Artificial Intelligence and Automation Projects in Government
The most fundamental use of artificial intelligence in the United States government is the automation of mundane office tasks. In 2019, Seattle adopted Robotic Process Automation (RPA) to handle data entry and application processing. Since then, the city has processed over 6,000 backlogged applications, saving hundreds of labor hours.
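The article doesn’t describe Seattle’s actual RPA stack, but as a rough sketch of what rule-based data-entry automation looks like, here is a minimal Python example that clears a backlog of applications from a CSV file. The field names and file paths are hypothetical.

```python
import csv

REQUIRED_FIELDS = ["applicant_name", "permit_type", "submission_date"]  # hypothetical schema

def process_backlog(input_path: str, output_path: str) -> int:
    """Copy valid backlogged applications into the processed file,
    mimicking the rule-based data entry an RPA bot performs."""
    processed = 0
    with open(input_path, newline="") as src, open(output_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=REQUIRED_FIELDS)
        writer.writeheader()
        for row in reader:
            # Skip records with missing required fields, as a human clerk would flag them.
            if not all(row.get(field) for field in REQUIRED_FIELDS):
                continue
            writer.writerow({field: row[field].strip() for field in REQUIRED_FIELDS})
            processed += 1
    return processed

if __name__ == "__main__":
    count = process_backlog("backlog.csv", "processed.csv")
    print(f"Processed {count} applications")
```

Real RPA tools drive existing user interfaces rather than files, but the appeal is the same: repetitive, well-defined clerical work gets done consistently and fast.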
Other government AI efforts are more eye-catching. The New York Fire Department is putting Boston Dynamics’ robot dogs through their paces, assessing structural damage and detecting dangerous fumes before firefighters arrive. The New York Police Department had planned to deploy the same robots before the fire department’s effort.
Similar technologies are being considered by police departments and other government organizations around the country. However, as these government AI programs gain traction, their potential privacy flaws become obvious.
Is Artificial Intelligence a Security and Privacy Risk?
It’s unclear whether police robots will see widespread use, but things appear to be headed in that direction. These programs have several advantages, but privacy concerns around artificial intelligence become more pressing when the government is the one deploying it. Here are some of the most serious problems with these technologies.
Hidden Surveillance
Artificial intelligence runs on data collection and analysis. As a result, more government AI programs mean these agencies will collect and store more data about their residents. Some individuals believe all of this data collection infringes on their privacy and violates their rights.
Technologies like the firefighting dog project are particularly troubling because they can disguise government surveillance. The robot is ostensibly there to check for safety hazards, but there’s no way for bystanders to know what data it’s gathering. It might carry cameras and sensors that scan their faces or track their phones without their knowledge.
Some individuals worry that the “cool appeal” of robots will obscure their surveillance capabilities. Police robots could one day spy on residents without raising suspicion because people see an impressive new gadget rather than an invasion of their privacy.
Unclear Responsibilities
These artificial intelligence and automation programs also raise the issue of accountability. Who is responsible if a robot makes a mistake that causes harm? When a government employee crosses the line and violates someone’s rights, courts can hold them accountable, but what about a robot?
This problem is already visible with self-driving cars. In some autopilot crash cases, people have filed product liability claims against the manufacturer, while others have blamed the driver. In one case, the National Transportation Safety Board held both the manufacturer and the driver responsible, but such questions must ultimately be resolved case by case. Police robots muddy the waters in the same way. If one violates your privacy, it’s unclear whether you should blame the manufacturer, the police department, or the robot’s human supervisors.
This ambiguity could stall and complicate judicial proceedings. Victims of privacy or rights violations may wait a long time for the justice they deserve. New legislation and legal precedent could eventually clarify and resolve the issue, but for now, it remains an open question.
Data Breach Risks
The United States government’s deployment of artificial intelligence could amplify the AI privacy problems already seen in the private sector. Some data collection may be entirely lawful, but the more organizations collect, the more is at risk. A corporation or government may never use the information illegally, yet holding it still leaves people exposed to cybercrime.
In 2019 alone, there were nearly 28,000 cyberattacks on the US government. If agencies retain more of individuals’ sensitive information, these attacks could harm far more than the government itself. A successful data breach could endanger many individuals without their knowledge. Breaches frequently go undiscovered, so it’s worth checking that your data isn’t already for sale.
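One way to run that check, as a minimal sketch: the public Have I Been Pwned API reports whether an email address appears in known breaches. The API key below is a placeholder; the service requires registering for one.

```python
import requests

HIBP_API_KEY = "your-api-key-here"  # placeholder: obtain a real key from haveibeenpwned.com

def breaches_for(email: str) -> list[str]:
    """Return the names of known breaches that include this email address."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={"hibp-api-key": HIBP_API_KEY, "user-agent": "breach-check-example"},
        timeout=10,
    )
    if resp.status_code == 404:  # the API returns 404 when the address is in no known breach
        return []
    resp.raise_for_status()
    return [breach["Name"] for breach in resp.json()]

if __name__ == "__main__":
    for name in breaches_for("user@example.com"):
        print(f"Found in breach: {name}")
```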
For example, if a future police robot uses facial recognition to find wanted criminals, it may end up storing large amounts of biometric data on ordinary civilians. Hackers who break into that system could steal the information and use it to access people’s bank accounts. Government AI initiatives need solid cybersecurity safeguards if they are not to jeopardize people’s data.
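As an illustration of what one such safeguard could look like, here is a minimal sketch, assuming Python and the third-party cryptography library, of encrypting a biometric record before it touches disk, so a stolen database file alone is useless to an attacker.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In a real deployment the key would live in a hardware security module or a
# key-management service, never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_biometric(record: bytes, path: str) -> None:
    """Encrypt a biometric record (e.g. a face embedding) before writing it to disk."""
    with open(path, "wb") as f:
        f.write(cipher.encrypt(record))

def load_biometric(path: str) -> bytes:
    """Decrypt a stored record; raises if the ciphertext was tampered with."""
    with open(path, "rb") as f:
        return cipher.decrypt(f.read())

if __name__ == "__main__":
    embedding = b"\x01\x02\x03\x04"  # stand-in for real biometric data
    store_biometric(embedding, "record.bin")
    assert load_biometric("record.bin") == embedding
```

Encryption at rest is only one layer; access controls, audit logs, and data-retention limits matter just as much.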
Government AI Has Advantages, but It Also Raises Concerns
It’s unclear how the United States government will employ artificial intelligence going forward. New safeguards and rules could address these difficulties, delivering the advantages of AI without the hazards. For now, though, these technologies raise some red flags.
As AI plays a larger role in governance, these concerns grow more serious. Government AI programs have the potential to do a lot of good, but they also carry real potential for harm.