China is currently developing a kind of social credit system, not dissimilar to a financial credit system, designed to reward people for doing good deeds and demonstrating honesty and integrity while punishing people for committing untrustworthy acts. The details of this system are still fuzzy; it's not yet clear whether it will aggregate big data from multiple sources or run as one centralized platform. Either way, it's certainly going to rely on big data and high-tech analysis to make its evaluations.
On the surface, this kind of system sounds like something out of Black Mirror—a nightmarish evaluative hierarchy that would needlessly segment people into classes and exclude them from living a normal life. But are there merits to such a system, and should we consider implementing one?
Comparisons to Financial Credit
In a sense, we already have a similar credit-based system in place: financial credit, with the FICO score being the main type of credit score considered. Many of the criticisms of a social credit system could also be applied to our current financial credit system, yet we still rely on it for the majority of our loan decisions and have become comfortable with it.
For example, as you'll see, one of the biggest criticisms of this type of system is the limitations it would impose on people with a low score. However, options exist today for people with low financial credit, and options would presumably exist for people with low social credit as well: it's entirely possible to get a credit card if you have bad credit, and it would be equally possible to build a family and lead a normal life if you have bad social credit.
Similarly, both systems allow you to improve your score over time. Rather than being permanently blacklisted based on one action or mistake, your score is fluid and will automatically adjust if and when your behavior is corrected.
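As a toy illustration of what "fluid" could mean in practice, penalties might decay over time instead of acting as a permanent blacklist. Every name and number below is hypothetical; no real credit system is known to use this exact formula.

```python
from dataclasses import dataclass

@dataclass
class Infraction:
    penalty: int      # points deducted when the infraction is fresh
    age_years: float  # time elapsed since the infraction occurred

def current_score(base: int, infractions: list[Infraction],
                  half_life_years: float = 2.0) -> float:
    """Hypothetical fluid score: each penalty halves every
    `half_life_years`, so the score recovers on its own as
    infractions age instead of marking someone permanently."""
    score = float(base)
    for inf in infractions:
        score -= inf.penalty * 0.5 ** (inf.age_years / half_life_years)
    return score

# A fresh 100-point penalty costs the full 100 points...
print(current_score(700, [Infraction(100, 0.0)]))  # 600.0
# ...but after two half-lives (4 years) it costs only 25.
print(current_score(700, [Infraction(100, 4.0)]))  # 675.0
```

The half-life is the key policy lever here: a short one makes the system forgiving, a long one makes it closer to a permanent record.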
One major point of contention is deciding which infractions, behaviors, or habits would be most likely to impact your social credit score. Some obvious infractions would be met with near-universal agreement. For example, when someone commits a premeditated, violent crime, it should be reflected in their social credit score. This isn't much different from the recording and reporting standards for felonies as they exist today. Lesser crimes, like jaywalking or littering, would carry a marginally less serious penalty on your social credit score.
The real controversy comes into play when you consider relatively innocent infractions. For example, in China, failing to visit an elderly parent or putting out the wrong items with your recycling could result in damage to your social credit.
The punishments also need to be considered. If the consequences of a low social credit score are minimal, few people would object to the system, but penalties too weak to incentivize better behavior would also make it practically useless.
As currently planned, the system will restrict travel if your social credit falls below a certain mark. You may also have trouble getting your children into private schools, which is much more impactful. Your score may also influence your social status; for example, some dating apps in China will publish your social credit score, which could influence how you pursue romantic interests. It could even spur bias in hiring decisions, preventing people from getting a job they might otherwise be qualified for.
Deciding on a System
The biggest issue isn’t with the nature of a social credit system, since most of us act as if there’s an informal one in place already. Instead, it’s with how the system is created and implemented.
For starters, will this system be centralized, with one set of standards for determining how a credit score is calculated and how punishments or rewards are doled out? Or will there be multiple sources of information coming together as one? Either way, who will be making this decision, and how will that decision be implemented?
There are several problems to work out here:
- Which infractions count, and how are they reported? The first question most people have is which social infractions or good deeds are going to count, and how are those infractions going to be reported? For crimes, this is straightforward; in addition to creating a writeup or making an arrest, a law enforcement officer could easily submit a report to the central social credit agency or each of several minor social credit agencies. It would take a lot of time, but would be somewhat clear-cut. Lesser infractions would be a bigger issue, since they would often rely on peer reporting. Peers aren’t typically reliable witnesses, so it wouldn’t take long for the system to be entirely compromised.
- How will discrepancies be resolved? There are multiple discrepancies that could potentially arise. For example, if there’s only one database, what happens if someone sees an infraction reported that they’ve never committed? Or what happens if a single infraction is reported multiple times? More importantly, what happens if you’re using multiple social credit reporting systems, and two of them conflict with one another? A blockchain-like system of verification could help here, but it wouldn’t solve everything.
- Who can “see” or request social credit? Your financial credit score can only be accessed by certain individuals and organizations, so would your social credit score be similarly protected? For example, would an employer be able to find your social credit score if they were considering hiring you? What about a vindictive neighbor who's trying to find a reason to have you evicted?
- Who’s creating the analytic system? Bias is prevalent in almost any algorithm. No matter how “smart” your AI is, if it’s been created by humans, it’s going to have flaws. There would need to be a series of checks and balances to make sure the system being created is as fair and unbiased as possible.
- Can social credit be appealed? What can you do to improve or appeal your social credit? Is there a statute of limitations for when and how infractions can be reported? Do infractions expire after a certain amount of time?
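Several of the bookkeeping questions above, such as duplicate reports, appeals, and expiring infractions, can be sketched with a minimal record-keeping model. Everything here is hypothetical (the class name, the three-year statute of limitations, the fingerprinting scheme); it only illustrates that deduplication and expiry are solvable mechanics, not the harder policy questions.

```python
import hashlib
from datetime import date, timedelta

# Hypothetical 3-year statute of limitations on infractions.
STATUTE_OF_LIMITATIONS = timedelta(days=365 * 3)

class CreditLedger:
    def __init__(self):
        self._records = {}  # fingerprint -> (person_id, date, description)

    def _fingerprint(self, person_id: str, when: date, description: str) -> str:
        # Identical reports hash to the same fingerprint, so one event
        # can't be counted twice even if reported by multiple sources.
        return hashlib.sha256(f"{person_id}|{when}|{description}".encode()).hexdigest()

    def report(self, person_id: str, when: date, description: str) -> bool:
        """File an infraction; returns False if it's a duplicate."""
        fp = self._fingerprint(person_id, when, description)
        if fp in self._records:
            return False
        self._records[fp] = (person_id, when, description)
        return True

    def appeal(self, person_id: str, when: date, description: str) -> bool:
        """Remove a record if a successful appeal identifies it exactly."""
        fp = self._fingerprint(person_id, when, description)
        return self._records.pop(fp, None) is not None

    def active_infractions(self, person_id: str, today: date) -> list[str]:
        """Only infractions inside the statute of limitations count."""
        return [desc for pid, when, desc in self._records.values()
                if pid == person_id and today - when <= STATUTE_OF_LIMITATIONS]

ledger = CreditLedger()
ledger.report("alice", date(2023, 5, 1), "littering")
ledger.report("alice", date(2023, 5, 1), "littering")   # duplicate, ignored
ledger.report("alice", date(2015, 1, 1), "jaywalking")  # long since expired
print(ledger.active_infractions("alice", date(2024, 1, 1)))  # ['littering']
```

Note what this sketch cannot do: it can deduplicate identical reports, but it has no way to tell whether the underlying report was truthful in the first place, which is exactly the peer-reporting problem described above.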
The Potential Benefits
Obviously, a social credit system would be incredibly complicated to develop and roll out, so it would need to have some massive benefits if it’s going to be considered.
There are some potential benefits—most notably, the incentive for people to avoid committing crimes and favor doing good deeds. A society where people have a strong reason to engage in honest business practices, or be positive contributors to their environments, is a society that functions harmoniously.
There are also side benefits to consider for companies and organizations. Companies would hypothetically be able to hire more trustworthy people, resulting in more efficient operations and better economic growth. Even advertisers could get in on the action, targeting ads to people with high or low social credit, or offering services for rebuilding a score that has fallen to dangerously low levels.
Are the Benefits Enough?
Those benefits aren’t free, however. To get them, we’ll need to consider the right infractions to track, the right methods for processing the data, the right system or combination of systems to record everything, and enough checks and balances to ensure the overall system is as unbiased as possible. Such an undertaking would be ridiculously expensive and require lots of tech talent. It would also likely go through several iterations before we settle on a final setup—iterations which would be problematic for many citizens.
Overall, a social credit system is fine in theory. It has a ton of perks and doesn't necessarily predict a dystopia. The problem is that there are too many variables to consider for something as subjective as a person's social trustworthiness, and unless we're confident in our assessments, we shouldn't condemn anyone to punishments that restrict their lifestyle or freedom.