A Multi-part Series on Digital Currency Financial Regulation
Series Overview: Unpacking financial regulations
This series unpacks current and proposed regulation of digital currencies. It will serve as a tool to help me learn the regulatory landscape. With reader feedback, I hope to refine the series into something useful for businesses and policymakers.
This series is not a legal review. I am not a solicitor or lawmaker. Many regulations are in the proposal stage or currently being amended. When this series ends, regulations will continue to evolve.
I will start with the following texts and I welcome suggestions for others:
The EU proposal for a regulation on Markets in Crypto-Assets (MiCA) from 2020
The EU proposal for rules around information accompanying transfers of crypto-assets from 2021
Why is this important?
Increasing Data, Decreasing Individual Control
Computers have greatly increased the data available about individuals and organisations: where we spend our time, who we spend our time with, what we say during that time, who we transact with, and so on.
Control of and access to our data has become centralised because of economies of scale and the scaling laws of networks. China represents an extreme, where movements, conversations and payments are meticulously tracked.
There are intermittent calls to “take back control of our data”. The question is… how?
Frameworks for taking back control of our data
Consider three different approaches to this problem:
Market forces will result in companies and governments relinquishing data control
It is tempting to dismiss this view entirely. However, privacy has – in some limited ways – been restored by large private entities. Take private messaging apps, for example. Yes, WhatsApp may still track metadata, but messages are end-to-end encrypted. That’s more privacy than text messages offered twenty years ago.
Yet, the amount of personal data held by centralised organisations is increasing rather than decreasing. While certain products offer elements of data privacy and ownership, most data remains out of individual control.
We need governments to protect the data of their citizens
This is the motivation of GDPR in the EU, which aims to protect user data and hold organisations responsible for how that data is held and used.
Markets involving data tend towards being consolidated. These oligopolies are incentivised to hoard rather than relinquish data owing to its monetary value. Accountability to broader interests is desirable and this is what democratic republics aim to achieve. In recent history, democratic republics have established and enforced norms and rights around individual freedoms. It seems natural and sensible for such political structures to contribute here as well.
On the other hand, regulations have trouble keeping up with technology. Regulations can lead to an unpleasant user experience (e.g. cookie banners). And, regulation sometimes moves rather than solves the issue of data centralisation, i.e. regulations may shift data access and control from centralised private companies to centralised governments, à la China.
We need technical solutions to return data control to individuals
In part, data is so centralised because of the convenience this offers. Individuals and organisations don’t like the hassle of managing their data. The question is: Can technology – rather than centralisation – make data control easier?
How does one control data? Passwords. What do passwords do? Decryption.
Giving people control of their data means encrypting it and ensuring only they hold the password (whether a word, a fingerprint or a combination of approvals). Giving citizens control of their data means doing more encryption and giving them the keys.
On the flip side, people are bad at minding passwords. And, giving people control of their data makes it harder to monitor crime. If governments can’t see messages, they can’t read criminal plots. If governments can’t see all transactions, they can’t see money laundering.
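The idea that “whoever holds the key controls the data” can be shown with a toy sketch. This is illustration only – it is deliberately not real cryptography (a production system would use a vetted authenticated cipher such as AES-GCM), and the message and key here are made up:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: derive a keystream from SHA-256(key || counter)
    # and XOR it with the data. Applying it twice with the same key
    # round-trips; applying it with a different key yields noise.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

message = b"monthly rent payment: 900 EUR"   # hypothetical payment record
key = b"only-the-citizen-knows-this"         # the citizen's "key"
ciphertext = keystream_xor(key, message)

assert keystream_xor(key, ciphertext) == message          # right key: readable
assert keystream_xor(b"wrong key", ciphertext) != message # wrong key: noise
```

Whoever can read the ciphertext but not the key learns nothing useful; whoever holds the key controls access. That asymmetry is the whole point of “giving citizens the keys”.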
Payments as a form of Personal Data
I’m dedicating a sub-paragraph to this topic because most people disagree – many very strongly – with me on it.
My position is that the moral case for allowing private payments and the moral case for allowing private conversations are very close, if not indistinguishable.
In Ireland or the US, if you contend that the government should be allowed to listen to private conversations in the home – by voice or by written message – most people will disagree. Ironically, nearly everyone has a smartphone or an Alexa that is capable of listening at all times.
Yet, crimes are planned using text or speech and broad government oversight of text or speech is useful in detecting and/or preventing crime. Likewise, many crimes involve or make use of money, and broad government oversight of all transactions is useful in detecting and/or preventing crime.
Why is it that we don’t surveil (in principle) any private conversation? Why do we afford citizens a right to silence? Why do we afford citizens attorney-client privilege? Why do we have the concept of “innocent until proven guilty”? Yet, for payments, we (in principle) monitor almost every transaction, down as far as 50 Euro in the case of pre-paid credit cards.
In payments, it’s “surveillance until proven guilty”.
The argument against payment privacy is that law enforcement would not be able to do their job. Yet law enforcement would be better able to do their job with surveillance of every aspect of private life. Steel-manning the moral case against payment privacy requires arguing that payments are a different sort of thing, i.e. a different category of personal information to speech or actions. And if one grants a moral case against payments themselves – e.g. that payments represent financialisation, and financialisation is dehumanising and immoral – it would seem odd to use this as a justification for turning payments into a surveillance tool; it would instead be a case for banning payments altogether. More likely, the extent of payment surveillance in place today is justified by its effectiveness as a tool, not by the moral standing of the approach.
Most people seem to disagree, but payments are a form of personal data. Financial access should not be weaponised against those who are innocent until proven guilty.
All of this said, a consequence of taking this approach is that one must answer the question of how – if you take away centralised payment surveillance – law enforcement and a trusted society can operate. This is a core topic that this series will return to.
A better way to regulate
I expect my views to change as I read through regulations, so it’s helpful to record them today. The area I’m least clear on is how to think about assets that are not clearly commodities or shares in a clearly defined company. Here are the areas where I have provisional views:
While there may be opportunities for simplification, regulations on publicly traded shares or bonds in companies are needed to protect retail shareholders. It makes sense to have a consistent and clear ruleset apply to companies issuing digital shares or bonds for public purchase.
Central banks should create their own digital currencies, but they should build in privacy up to a substantial threshold of transaction size. If a central bank doesn’t want to issue its own digital currency, it should make clear regulations and the pathway for privately issued forms of the central bank currency that have government oversight and audits.
Any business that custodies digital assets for customers should be required to hold those assets in reserve and should not be allowed to lend or sell them. These businesses should be subject to regular audits and should make information on their reserves public.
There is much talk of replacing investor net-worth requirements with exams. At a high level, this sounds sensible, but the devil is in the detail. Not all financial assets are equally complex, so at a minimum there would need to be different tiers of exams. Very quickly, this could start to look like the pathway to becoming a chartered accountant.
As far as possible, regulatory systems should aim to keep data control close to individuals rather than in large centralised databases that allow for surveillance.
The first image below shows how data on payments flows today. It’s highly simplified. Broadly:
Payment processors (and/or banks) have access to all data. They hold the “key” to seeing the data and controlling flows.
Governments have access via payment processors/banks. Via regulation, they have indirect control over the “key”.
An improvement (diagram below) would be to move the “key” back towards citizens:
The “key” could indeed be a passphrase. In practice, citizens would use software tools to manage their access. For example, they would be able to nominate friends/family or trusted organisations for “key” recovery.
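Nominating friends or trusted organisations for recovery is typically done with threshold secret sharing: split the key into n shares so that any k of them can reconstruct it, but fewer reveal nothing. Below is a toy sketch of Shamir’s scheme over a prime field – illustrative only; real wallets use audited libraries, and the secret here is an arbitrary example number:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; all arithmetic is modulo this field

def make_shares(secret: int, k: int, n: int):
    # Hide the secret as the constant term of a random degree-(k-1)
    # polynomial; each share is one point (x, f(x)) on that polynomial.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 reconstructs the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=424242, k=2, n=3)  # any 2 of 3 nominees suffice
assert recover(shares[:2]) == 424242
assert recover([shares[0], shares[2]]) == 424242
```

A citizen could hand one share each to a sibling, a friend and a trusted organisation; losing a phone no longer means losing the key, and no single nominee can act alone.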
Merchants would, obviously, have information on transactions with their own customers. There may be exceptions, but merchants should be diligent about who they transact with – this is in the merchant’s own interest.
A critical difference in this system design is that payment processors no longer store information on all transactions. They may see some information but they are not used as a tool of anti-money laundering enforcement. In certain cases, the payment processor may just be lines of code – ideally publicly published for transparency. As such, the merchant would “process” their own payments.
The government still sets regulations, upholds laws and has the ability to request information provided due process is followed. This would happen in an analogous way to how information is requested to resolve non-financial crimes. The government would not have default access to seeing all payments without due process being followed.
Very loosely, an analogous approach would be helpful for social media and health data. We need to develop systems and tools that keep data down towards the citizen level and away from the centralised level (whether that be centralised government, centralised big tech, centralised healthcare providers or centralised blockchains).
Importantly, system design is scale dependent. That’s why I say we should keep data “towards citizens”, not “at the citizen level”. There may be countries small enough that centralising certain healthcare information makes sense. In other countries, it makes sense to centralise at the scale of the state. At a smaller scale still, there are communities where it makes sense to share data access.
We should avoid a tendency to accumulate more and more data at the highest centralised level.
Trust is critical to the system, but trust should be fractal: higher levels of trust and sharing close to the citizen, lower levels of trust, sharing and data at the highest institutional levels, plus due process to access that data.
Some notes on terminology
There are a few who believe crypto represents sovereignty over data, including financial data.
Some associate crypto strongly with Bitcoin and prefer to avoid that connotation by referring to blockchain or web3.
Some think blockchain and web3 are 100% speculation and prefer to talk about distributed ledger technology, or DLT.
Most people, though, either say they don’t understand crypto or feel it is 100% speculation and/or a scam.
The words crypto and blockchain are heavily polluted. This is not unjustified. However, working with words that are synonymous with speculation and fraud makes it hard to talk about the important political decisions we face around data control.
Instead of “blockchain”, it’s more helpful to talk about databases. Blockchains are a subset of databases that (should) focus on making information hard to fake. Blockchains a) timestamp information relative to other published information and b) make lots of independent copies of it. A blockchain is like Archive.org, but with cryptographic proofs linking each piece of content to what was published immediately before and after.
The quintessential meme is Stalin removing secret police officer Yezhov from a photo after turning against him. We know Yezhov was removed because there were other copies of the photo. This is what blockchain does – duplication – but it also does authentication (the indelible chaining with what comes before and after).
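The “chain + duplicate” combination can be sketched in a few lines. This is a toy hash chain, not any real blockchain’s format – the field names and the Yezhov example contents are illustrative:

```python
import hashlib
import json

def _digest(content: str, prev: str) -> str:
    # Hash the content together with the previous block's hash, so
    # editing any block invalidates every hash that comes after it.
    payload = json.dumps({"content": content, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain: list, content: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"content": content, "prev": prev,
                  "hash": _digest(content, prev)})

def verify(chain: list) -> bool:
    # Walk the chain, recomputing each hash; any edit breaks the links.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != _digest(block["content"], prev):
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, "photo with Yezhov")
add_block(chain, "photo without Yezhov")
assert verify(chain)

chain[0]["content"] = "there was never a Yezhov"  # Stalin's edit
assert not verify(chain)
```

Duplication comes from many independent parties holding copies of the chain; authentication comes from the fact that a retroactive edit fails verification on every honest copy.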
It’s not as though other databases don’t or can’t do this to some extent; it’s a spectrum. Amazon’s servers replicate information across many regions, which makes that information more robust. For authentication, though, you have to trust Amazon – which is reasonable, and even optimal, in many circumstances.
Personally, I prefer “cryptography” over “crypto”. Cryptography is a technology. Moving data control (i.e. cryptographic keys) towards citizens is a political philosophy. The only meaningful sense in which you can own or control data is if you have the password/secret/phone/fingerprint/DNA to control that data, i.e. control of cryptography.
Cryptography is already here. What’s only partially here is better distributing cryptographic control. But, we are making progress.
To restate an earlier point, controlling passwords is not trivial. We need better technical solutions and I’m optimistic we are finding them. There are tighter parallels between banking and crypto and also between messaging and crypto than most realise or like to admit.
Many people use encrypted messages on WhatsApp without realising it. Losing your WhatsApp messages is the problem analogous to losing the seed phrase for a digital wallet.
Banks manage vast sums of funds electronically. They have timelocks and multi-person approval schemes in place to achieve security. Technology is bringing these approvals and timelocks towards citizens with multi-sig wallets and account abstraction. It’s not about replacing trust; it’s about building trust into digital systems – and doing that at a citizen and community level, not just a centralised level.
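The approval-plus-timelock pattern banks use can be expressed very compactly. A minimal sketch, assuming nothing about any real wallet or bank system – the class name, signer names and threshold scheme are all illustrative:

```python
class MultiSigTimelock:
    """Toy m-of-n approval scheme with a timelock (illustration only)."""

    def __init__(self, signers, threshold, unlock_at):
        self.signers = set(signers)   # who is allowed to approve
        self.threshold = threshold    # how many approvals are required
        self.unlock_at = unlock_at    # earliest time execution is allowed
        self.approvals = set()

    def approve(self, signer):
        # Approvals from unknown parties are silently ignored.
        if signer in self.signers:
            self.approvals.add(signer)

    def can_execute(self, now):
        # Both conditions must hold: enough approvals AND the timelock passed.
        return len(self.approvals) >= self.threshold and now >= self.unlock_at

wallet = MultiSigTimelock(["alice", "bob", "carol"], threshold=2, unlock_at=1000)
wallet.approve("alice")
assert not wallet.can_execute(now=2000)  # one approval is not enough
wallet.approve("bob")
assert not wallet.can_execute(now=999)   # timelock has not yet passed
assert wallet.can_execute(now=1000)      # two approvals, lock expired
```

Multi-sig wallets and account abstraction put exactly this kind of policy under the citizen’s own key material rather than inside a bank’s back office.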
Citizens’ Keys: Bringing data control towards citizens.
That’s it for the first piece in this series. If you think others will be interested in following along, you can let them know it’s free to subscribe.