People are often wary of new things. While the public debates whether driverless cars do more harm than good, AI's role in reshaping finance has raised its own legal and ethical worries.
Letting a software program decide who qualifies to open an investment account, who can get a loan, how much interest to charge, or even whether to serve customers outside the mainstream financial system may have unintended consequences.
But some argue that nothing could be fairer than an algorithm: it is pure mathematics, after all, so there can be no prejudice, right?
"A machine or a robot will make decisions based on facts and standards," said Steve Palomino, director of Fiscal Transformation at Sequoia Software. "Removing human factors means that people can apply standards based on facts and circumstances, not on Preference or personal bias and experience."
The bias problem

One flaw in this argument is that algorithms are created by humans, and they often carry subtle biases their creators are unaware of. These prejudices can become embedded in the algorithm itself. Last year, for example, Carnegie Mellon University researchers tested how Google's algorithm targets advertisements and found that users tagged as "male" in the ad settings were shown higher-paying job ads more often than users tagged as "female".
To take a more extreme example, an AI engine making credit decisions might learn to approve only applicants who graduated from Ivy League schools or whose household income exceeds $300,000.
By its very nature, artificial intelligence cannot be fully controlled the way today's rule-based systems can. An AI system learns over time how people typically behave: it takes in information, makes decisions, such as credit assessments, based on that data, and keeps observing the outcomes for risk control. As it learns from its wrong and its correct calls alike, it continually revises its own rules and algorithms, and eventually begins to draw its own conclusions.
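To make that loop concrete, here is a minimal sketch in Python of an online learner making credit decisions and revising its own rules from observed outcomes. The feature names, thresholds, and data feed are illustrative assumptions, not any real bank's system.

```python
# Minimal sketch of the feedback loop described above: the model makes a
# credit decision, observes the repayment outcome, and revises its own
# rules. Feature names, thresholds, and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")   # online logistic regression
classes = np.array([0, 1])               # 0 = repaid, 1 = defaulted

def decide(features):
    """Approve when the predicted default probability is low."""
    return model.predict_proba([features])[0][1] < 0.2

def stream_of_applications():
    """Hypothetical stand-in for a live feed of (features, outcome) pairs."""
    yield np.array([60_000, 14, 690]), 0   # income, years of education, score
    yield np.array([30_000, 10, 580]), 1

# Seed the model with an initial batch so predict_proba works.
X0 = np.array([[45_000, 12, 620], [120_000, 16, 760]])
y0 = np.array([1, 0])
model.partial_fit(X0, y0, classes=classes)

for features, outcome in stream_of_applications():
    if decide(features):
        # Once the outcome is observed, the model updates its weights --
        # in effect, it rewrites its own decision rules.
        model.partial_fit([features], [outcome])
```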
Amitai Etzioni, professor of international affairs at George Washington University and director of its Institute for Communitarian Policy Studies, points out that driverless cars are instructed not to speed, yet they are also designed to learn, so they accelerate when the cars around them accelerate. The standard they apply, in other words, is adjusted to whatever reference points surround them.
The same principle applies to mortgage decisions handed over to AI. "The bank tells the program: under no circumstances may race be used as a criterion," he said. "But the program discovers that risk correlates with income, education, and zip code, and since education and zip code are themselves tied to race, it asks, in effect: why can't I use race as one of my rules?"
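Etzioni's point can be demonstrated on synthetic data: even when the protected attribute is withheld from the model, a correlated proxy such as zip code lets it reconstruct the forbidden criterion. Everything below, data, correlations, and feature names, is assumed purely for illustration.

```python
# Sketch of the proxy problem: the protected attribute is withheld from the
# model, but a correlated feature (zip code) reconstructs it anyway.
# All data and correlations below are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)               # protected attribute, never shown to the model
zip_code = group + rng.normal(0, 0.3, n)    # proxy strongly tied to group
income = rng.normal(60, 15, n)              # income in $thousands
default = (rng.random(n) < np.where(group == 1, 0.30, 0.10)).astype(int)

X = np.column_stack([income, zip_code])     # `group` deliberately excluded
model = LogisticRegression(max_iter=1000).fit(X, default)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted default rate {pred[group == g].mean():.2%}")
# The two rates diverge sharply: zip code smuggled the excluded
# attribute back into the decision.
```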
Artificial intelligence programs also lack a conscience. "It's hard to code ethics into a machine whose only purpose is to find the best solution for the company," said Steve Ehrlich, lead analyst at the consulting firm Spitzberg Partners.
Checks and balances

What is needed, Etzioni said, is an AI guardian: a companion AI system that ensures the artificial intelligence engine does not drift away from a given set of values.
But will the AI guardian itself eventually learn bad habits?
"This problem is an unsolved problem from Plato. In Plato's ideal kingdom, the supreme ruler is the guardian," Etzioni said. "But who will guard the Guardian? Finally, humans must have a closed loop to check each other. . "
This leads to a third problem: the inner workings of artificial intelligence programs are often hidden, and even their creators cannot explain them. In other words, AI is a black box. When financial institutions use AI systems to decide who can become a customer, whom to lend to, and how much to charge, those decisions become unknowable.
These issues are not confined to traditional financial institutions. Many fintech companies have adopted black-box automated decision-making and rely heavily on algorithms. Social Finance (SoFi), for instance, has declared itself a "FICO-free zone," meaning it no longer uses the FICO scoring model in its loan-qualification process.
But nobody outside these companies knows what data their algorithms use. Ron Suber, president of Prosper, a P2P lender in the United States, once said that the company analyzes 500 data points for each borrower. Which data points, exactly? They never say.
Ehrlich said that letting AI engines make financial decisions also raises privacy issues, since acquiring the necessary data may infringe on users' privacy.
"If a credit company wants to look at your social media or search history to determine your credit score, it should at least inform customers that it plans to use this information."
Of course, data privacy issues exist with or without artificial intelligence. But an AI program's success depends on its ability to analyze massive amounts of data, and unlike IBM's Watson or Google's AlphaGo, banks cannot simply throw everything they have into their AI engines.
"Banks should publicize in the upfront what information they are going to collect, how they are collected and where they are used."
Can you rely on it?

Another issue with using AI to automate decision-making, for instance through smart-contract technology, is how far that automation can safely be depended on.
"If we're not careful, we may not achieve everything we think automation can do for us," said Duhaime, a Toronto lawyer who specializes in anti-money-laundering rules, anti-terrorist financing, and multinational asset recovery, and a co-founder of the Digital Finance Institute. "The reason is that the higher the level of automation we reach, the harder it becomes to get through to a human being."
Ehrlich also pointed out that whenever an automatically generated decision can have negative consequences for a customer, a protective mechanism is needed.
Enterprises have a particular responsibility to ensure that all data used in the decision-making process is accurate and up to date, to access stored data only where the user has expressly authorized it, and to maintain appropriate technical security measures and privacy-protection policies.
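A minimal sketch of such a safeguard, with assumed thresholds, might route every adverse automated decision to a human reviewer and refuse to act on stale data:

```python
# Sketch of the protective mechanism described above, with assumed
# thresholds: adverse automated decisions are never final, and decisions
# may not be taken on stale data.
from dataclasses import dataclass
from datetime import datetime, timedelta

MAX_DATA_AGE = timedelta(days=90)   # assumed freshness requirement

@dataclass
class Decision:
    approved: bool
    data_updated: datetime
    reason: str

def finalize(decision: Decision) -> str:
    if datetime.now() - decision.data_updated > MAX_DATA_AGE:
        return "HOLD: data is stale, refresh before deciding"
    if not decision.approved:
        # A negative outcome for the customer triggers the safeguard:
        # a human must confirm, and the reason is recorded for disclosure.
        return f"HUMAN REVIEW: {decision.reason}"
    return "AUTO-APPROVED"
```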
Duhaime pointed to another danger: the technology can effectively exclude disabled and elderly people who cannot use computers or mobile devices. "In that case we have raised the banner of inclusive finance but ended up shutting out many of our own customers. We are not solving the bank's existing problems; we are just creating new ones that may never be solved."
The same AI systems could instead be applied to building technologies for disabled users. "If we cannot serve the disabled market, the technology may create more harm than good in the future."