While there is no doubt about the potential of Artificial Intelligence, its consequences are just as critical to understand and counter. With AI advancing rapidly and already embedded in organizations and processes all around us, we need to analyze how it will affect us and the world we live in.
We talked to experts in the field to get a better understanding of the adverse outcomes of AI.
Unreliable Screening of Tenants: “An increasing number of landlords — across the private and public sectors — require prospective tenants to apply through AI-based screening tools. Marketed as a way to select reliable tenants, streamline the application process, and let tenants ‘improve’ their tenancy behavior, the algorithms assign a ‘score’ reflecting their prediction of how likely a tenant is to cause damage to the property, to be late on rent (including by reason of actual or implied health or wellness issues that might prevent regular work), or to vacate before the lease expires. They often also evaluate a property’s ‘suitability’ for a prospective tenant.
“Many of the tools go well beyond conducting credit checks: they demand copies of government-issued photo identification. Some require biometric information and draw on public social media content and other information accessible via the internet — data that landlords are prohibited from asking for under human rights and anti-discrimination laws, and that is unrelated to the tenancy (e.g., commute information).
Violation of Privacy: “The apps’ privacy policies indicate that prospective tenants’ personal information (including information about children and other dependents) is shared with advertisers, with no way for individuals to opt out.
Creating Homelessness: “Tenant screening tools also produce ‘rental performance’ databases that can serve as a blacklist and hamper renters’ ability to qualify for housing. Tenants have little recourse to verify whether the information amassed about them is correct, or to have it corrected, and no way to know how decisions about them are made. But if they want any chance of renting from a landlord or rental agent who uses automated tools, they have no option but to agree to the collection of whatever information the app requires and to submit to whatever demands the landlord makes. That’s not consent; that’s coercion. And it contributes to the homelessness epidemic.”
Sharon Polsky is president of the Privacy and Access Council of Canada; a Privacy by Design Ambassador; Vice-Chair of the CIO Strategy Council Technical Committees for Privacy & Access Control Standards.
“The most promising AI technologies out there today (like GPT-3 by OpenAI) are language models. They work a lot like the autocomplete on your phone. If you type in ‘I'm going to take the dog to the [BLANK],’ the AI will calculate the most probable completion (park, beach, vet). The problem with this type of AI is that it amplifies our biases in unintended ways. A few years ago, Amazon built an AI to sort through resumes. They had to scrap the program after the AI started eliminating women and people of color from consideration. That sounds horrible, but the AI wasn't wrong: Amazon was more likely to hire white men, so the AI started sorting for it. It's a lot like how your phone always wants to autocorrect to curse words. It's not probabilistically wrong, but you really wish it wouldn't do that. This type of amplified bias is going to be a problem in every industry that utilizes AI in a major way.”
Shaun Poore, ShaunPoore.com
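The fill-in-the-blank behavior Poore describes can be sketched in a few lines. The snippet below is a toy next-word predictor built from word counts over a hypothetical mini-corpus (the sentences and frequencies are invented for illustration); production models like GPT-3 use neural networks rather than counting, but the core idea — predict the statistically likeliest continuation of the text so far — is the same, which is also why skewed training data produces skewed predictions.

```python
from collections import Counter, defaultdict

# Hypothetical training corpus, purely illustrative.
corpus = [
    "i took the dog to the park",
    "i took the dog to the park",
    "i took the dog to the vet",
    "i took the dog to the beach",
]

# Count which word follows each two-word context.
continuations = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        context = (words[i], words[i + 1])
        continuations[context][words[i + 2]] += 1

def predict_next(w1, w2):
    """Return the most frequent word seen after the context (w1, w2)."""
    counts = continuations[(w1, w2)]
    return counts.most_common(1)[0][0] if counts else None

# "I took the dog to the [BLANK]" -> the likeliest filler in this corpus.
print(predict_next("to", "the"))  # -> park
```

Note that the model never decides whether “park” is a *good* answer; it only reports what the training text did most often. Swap in a corpus of hiring decisions that favored one group, and the same mechanism will dutifully reproduce that bias.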
“Artificial Intelligence is a booming technology with the power to rival the human brain. It can automate enormous tasks and sort through volumes of data efficiently and accurately. Despite its magnificent benefits, it also has consequences that could be dangerous and problematic in various ways. To name a few:
Loss of Employment: “The primary one is the fear of job loss. As AI has the capability to automate many of the tasks that humans consider significant, it is expected to spur job losses.
Privacy Violation: “This is another crucial consequence, one that causes great alarm among businesses and nations. By exploiting its power, AI can extract and access sensitive information and put it to harmful use.
Automated Weapons: “Automating weapons with AI can be perilous for defense systems. A minor flaw in the AI setup can lead to terrifying results. Other frightening activities, such as hacking and AI-driven terrorism, are also possible.”
Kapil Panchal, Technical Content Writer iFour Technolab Pvt. Ltd.
“Artificial intelligence is growing daily in the digital industry, especially in the IT and software sectors. Although it is a boon to these industries, it also has a dark side, which is discussed here.
Loss of Employment: “The main threat it poses is the loss of certain jobs. As AI is designed to minimize human involvement, it is likely to cut many MIS-based jobs.
Bias: “The other major issue with AI is that it can be biased. Because humans design the algorithms, bias can be built in, whether introduced intentionally or inadvertently. AI therefore needs to be developed by responsible professionals with the utmost care.
Prone to Hacking: “AI systems are more prone to hacking attacks such as phishing and malware, in part because they have not faced enough real-world dilemmas. Privacy is also under threat: as AI holds disproportionate power and control over data, it is likely to threaten the confidentiality of user data and critical organizational data. Once hackers breach the initial protocol layer, it will be easy for them to access data, and if the system runs in a cloud environment, it is even more vulnerable to attack.”
Miranda Yan, Co-Founder of VinPit