The Danger Is Not Just From Government Abuse



For the past few years, we have focused our discussion of facial recognition on government use, and with good reason. It is a powerful and dangerous surveillance tool, and over the last decade a large number of law enforcement agencies have already begun to use it.

But we also need to pay attention to another emerging application of the technology, one that could pose a serious threat to the privacy and safety of private citizens: unrestrained consumer use.

Recently, an app called PimEyes has garnered attention for its plans to offer consumers access to facial recognition: Subscribers can run photos of any individual they want through PimEyes’ systems, which will attempt to identify the person by returning every photo found online that appears to match.

Face matches from across the web – from blogs, online photo albums, YouTube, even pornographic websites – are accessible to anyone for just $30 a month.

PimEyes works by scraping images from across the internet (often in violation of sites’ terms of service designed to protect users). In this it resembles Clearview AI – a face recognition company with a record of misrepresenting how its technology works – which has also considered expanding to the consumer market.

Expanding facial recognition and placing it in the hands of individuals everywhere presents serious risks that policymakers must grapple with before it becomes an easily accessible tool for abuse.

Stalking and harassment

The first major danger of consumer-focused face recognition is how it could become a weapon for stalking and harassment. This has already been a problem abroad with FindFace, an app made by a Russian company similar to PimEyes. It’s not hard to imagine how a technology that turns your smartphone into a personal encyclopedia about anyone you meet in a bar or coffee shop could be abused. For example, individuals trying to stay hidden from former partners who have threatened or committed domestic violence could easily be put at risk.

Despite these problems being both obvious and previously demonstrated by systems like FindFace, PimEyes dismisses concerns about this risk. The company has said, “Our privacy policy prevents people from using our tool for this purpose.” That answer is less a plan to prevent or minimize risk than a declaration that the company considers itself unaccountable if harm occurs.

Misidentification during background checks and surveillance

One of the most prominent problems with law enforcement use of facial recognition is the risk of misidentification, and that risk certainly carries over to consumer-focused systems, even if the consequences are different.

Background checks performed during important transactions – such as applying for a job, renting a home or seeking a loan – may begin to include facial recognition scans as a routine practice. A cursory look at erroneous “matches” from such a service could be life-changing: Recent reporting on PimEyes, for example, found that it returns false matches from pornographic websites.

Early use of face recognition for vetting has already produced harmful consequences. In the U.K., Uber’s use of face recognition has caused many misidentified individuals to lose their jobs. And more than 20 states rely on the identity verification service ID.me, which uses face recognition to run fraud checks on recipients of unemployment benefits; the system has incorrectly labeled individuals as fraudsters, “causing widespread delays of life-sustaining funds.”

Finally, misidentification could become a widespread risk if consumer-grade face recognition spreads among online sleuths seeking to identify criminals. As a tool that is quick and easy to use but unreliable in its results, face recognition could be a perfect storm for internet vigilantism, a field already plagued by dangerous misidentifications.

A case from 2019 shows how posting erroneous facial identifications online can spiral out of control: After the Easter bombings in Sri Lanka that year, authorities included a U.S. college student – based on a facial recognition misidentification – on a public list of suspects, causing the student to face a wave of death threats.

These risks are all magnified by the fact that facial recognition has repeatedly been shown to misidentify women and people of color at higher rates, meaning that individuals who already face barriers of systemic sexism and racism in institutions ranging from housing to medical care may soon find additional arbitrary and life-altering barriers built into daily life.

Public de-anonymization and doxxing

Another serious risk posed by consumer face recognition systems is how they could escalate efforts to de-anonymize and dox people (an internet-age practice of disclosing a person’s private information with the intent to incite harassment) who have engaged in potentially sensitive activities in public.

We have already seen this play out in the context of sex work. FindFace was routinely used to de-anonymize and stalk sex workers and adult film actors. In 2019, a Chinese programmer claimed to have developed a face recognition program to identify and publicly catalog 100,000 women on adult websites, explicitly so that men could check whether their partners were appearing in adult films. After public outcry, the programmer reportedly shut the program down.

A broad range of public activities could be targeted. Doxxing has already become a weapon against protesters and public health officials, and license plate tracking has long served as a de-anonymizing tactic deployed against abortion patients. If face recognition technology becomes broadly available, it could supercharge these doxxing tactics. With an app on anyone’s phone, any sensitive activity – such as organizing a union, attending an Alcoholics Anonymous meeting or going to a political event – could leave a lasting mark on a person.

What actions do we need to take?

Consumer-grade face recognition is emerging because current law does not keep our online biometric data safe from misuse. That needs to change.

Face recognition requires access to a database of “reference images” – high-quality photos tied to known identities against which comparisons can be run – such as mugshot or driver’s license databases, a requirement that once limited face recognition technology largely to government use. That is no longer the case.
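To make the mechanics concrete, here is a minimal sketch of how matching against reference images works, using the open-source face_recognition Python library. The file paths and the 0.6 distance threshold are illustrative assumptions, not details of PimEyes or any other product.

# Minimal sketch: compare an unknown photo against a "reference image"
# (a photo tied to a known identity). Paths and threshold are
# illustrative assumptions only.
import face_recognition

# Reference image: a high-quality photo with a known identity attached.
reference = face_recognition.load_image_file("reference/known_person.jpg")
reference_encoding = face_recognition.face_encodings(reference)[0]

# Unknown image, e.g. a photo found online.
unknown = face_recognition.load_image_file("unknown/photo.jpg")

# Each detected face is reduced to a 128-number "faceprint";
# matching is simply a distance comparison between faceprints.
for encoding in face_recognition.face_encodings(unknown):
    distance = face_recognition.face_distance([reference_encoding], encoding)[0]
    print(f"distance {distance:.2f} -> match: {distance <= 0.6}")

The key point is that identification is only as good as the reference database: Whoever controls a large set of labeled photos controls who can be identified.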

Web scraping of social media is changing this. Companies like Clearview AI and PimEyes treat the internet itself as a reference database, harvesting billions of photos from social media sites. Users have not consented to their photos being taken in bulk, and the practice is in fact prohibited: Such scraping violates the terms of service that govern how even publicly accessible photos may be used.
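For illustration, the collection step is technically trivial. The sketch below, assuming a hypothetical page address, uses the Python requests and BeautifulSoup libraries to gather every image URL on a page; run across billions of pages, this same loop is what builds an internet-scale reference database.

# Minimal sketch of image scraping: collect every image URL on one page.
# The page address is a hypothetical placeholder.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

page_url = "https://example.com/photo-album"  # hypothetical
html = requests.get(page_url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# A crawler would download each image and feed it to a face-encoding
# step like the one sketched above.
image_urls = [urljoin(page_url, img["src"])
              for img in soup.find_all("img")
              if img.get("src")]
print(f"found {len(image_urls)} image URLs")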

Just because images are publicly accessible doesn’t mean they are “public information” that can be used in whatever harmful way a company wants. Users post photos to social media sites under terms of service that restrict how those photos may be used.

Harvesting images in violation of those terms doesn’t just break the rules of sites like Facebook and YouTube; it overrides the consent users gave about how their images could be collected and used.

However, the best way forward is not a total ban on scraping that violates websites’ terms of service. There are situations where web scraping serves an important public interest: It is used in research, academia and journalism. Barring those uses of web scraping would be harmful and is unnecessary for addressing this problem.

Face recognition powered by scraped online images stands apart because of what it collects: biometric data. Such information is highly sensitive in nature, which makes its mass collection in violation of terms of service – and without the consent of the people whose biometric information is taken – a unique risk.

There may also be some limited exceptions where face recognition tied to web scraping is acceptable.

For example, PimEyes claims that it scans adult websites so that subscribers can find and respond to their own images posted as revenge pornography. This seems like a valuable service to provide, and it lacks the feature that makes scraping biometric data so troubling in the first place: In this situation, the user genuinely consents to having their own image scraped from the web and run through a facial recognition system.

Because social media companies can do little more than send cease-and-desist letters to companies like Clearview AI and PimEyes, it is clear that legislators need to act to prevent serious harm from materializing if these services become available to the general public. The recently introduced Fourth Amendment Is Not For Sale Act is a good start. The bill would bar entities from selling data obtained in violation of terms of service to government agencies. We should consider extending this provision to cover the sale of such data – or at least biometric data – to any party, including private citizens.

Originally published by the Project On Government Oversight.

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the views of Children’s Health Defense.


