Our Encrypted Minds

9th Jun 2020
Reading Time: 21 minutes
Philosophy, Ontology

In a previous post, I discussed the Three Worlds, an ontology that distinguishes between three different types of existence: the objective, the subjective, and the inter-subjective. Objective reality exists independently of any consciousness; subjective realities are our own unique, personal experiences; and inter-subjective realities are realities we co-construct with other conscious beings. This post will deal primarily with the distinction between the objective and subjective worlds: the boundary between our brains and our minds.

The influential 17th century philosopher René Descartes developed an ontology called mind-body dualism, which explicitly defined the mental realm as immaterial and independent of the physical world. Descartes' dualism proposed that the physical and mental worlds were connected through the brain, but that minds had an independent existence of their own. In his ontology, the mind was merely disconnected, not destroyed, when the body died, and would continue on... somewhere.

The notion of separate, independent existence of the mind and body is not what this post is about. There is ample evidence that our unique mental lives are in fact created by the physical processes of our brain, and therefore our subjective experience depends on the objective arrangement of matter between our ears.

However, this close relationship does not necessarily mean it will ever be possible to reverse-engineer the nuances of our subjective experience and describe its physical components objectively. Our minds are emergent phenomena—more than just the sum of their parts—and explaining how our brains work does not mean we can do away with exploring our lives subjectively.

The Problem with Simple Physicalism

Have you ever loved a book or a movie that nobody else seems to like? What's it feel like to take a bite of your favorite food? How many songs remind you of a specific time in your life? Are you sure you see the same green I see?

Philosophers call these individual subjective experiences qualia. They make up much of our conscious experience in life, but there's no way to directly experience someone else's qualia. We can only do our best to describe our subjective feelings and experiences to each other through language and art, but that has many limitations. We can never explain the colors of a sunset to someone born without sight, or music to someone born without hearing. It can be difficult to explain what it's like to live with chronic pain, or all the emotions tied up with a single bittersweet memory.

Imagine for a moment that neuroscientists discovered that our central nervous systems translated our feelings and senses into a microscopic particle, one that contained the pure essence of that experience. I'll call these imaginary particles ideons. Suppose when someone looked at a green field, the brain generated "green" ideons. Or when they felt a rough surface, it would generate "rough" ideons. Tasting something salty would generate "salty" ideons, and so on.

Then suppose that we discovered that our brains built more complex thoughts and memories by connecting ideons together into packages of associated traits called memocules. These memocules would contain a complete experience by combining ideons representing all of the feelings, sensory perceptions, and associations that created it. So, for example, the concept of a red rubber ball might correlate to a memocule containing "red", "round", "firm", and "bouncy" ideons. The concept of a spring day might be a memocule containing "bright", "warm", "breezy", and "fragrant" ideons. Our memory of a particular spring day might be a memocule composed of ideons that added up to "a bright warm breezy day that smelled of flowers and I bounced a red ball".

If such particles existed, it would instantly resolve many philosophical questions, as well as opening up possibilities for incredible technologies. We'd know for sure that the same green you see is the same green I see, because we could isolate the "green" ideon. Culinary science would be as rigorous a discipline as physics. Murders could be solved by extracting memories from the victim. You could learn the Theory of Relativity from copies of Einstein's own thoughts.

Such a discovery would be a victory for physicalism, which is the philosophical position that everything can be explained as physical objects and processes. Ideons and memocules would be a particularly literal validation of physicalism: mental objects would be physical objects, because every qualia would correspond to a memocule. Such a discovery might not fully explain everything about our mental world, but it would be a massive step forward.

Unfortunately, what we know so far is that our brains are nothing like this simple fantasy of ideons and memocules. We've learned a lot about the brain over the past few centuries, including which structures in the brain are associated with specific functions. Much of this progress was slow, gleaned by studying brains damaged in traumatic accidents. Despite the advances in our understanding, the truth is that we still don't understand the physical process our brains use to create the mental experience of our subjective reality.

Mind Emerges From Matter

What we do know is that our brains contain nerve cells called neurons, which communicate with each other via chemical connections called synapses. Currently, we believe that the human brain has about 86 billion neurons [1], with about 16 billion of them in the cerebral cortex. This structure is unique to mammals and is presumed to be where the most complex aspects of our conscious experience reside.

A neuron can have thousands of synaptic connections, and we estimate that the neocortex (the layers of the cerebral cortex most associated with higher brain functions) contains around 150 trillion synapses [2]. Together, these interconnected neurons form a neural network, and the activity of this neural network somehow creates, well... us.

There are significant difficulties involved in studying brains while they are in use. One approach is a functional magnetic resonance imaging (fMRI) technique called blood-oxygen-level-dependent (BOLD) imaging. This approach exploits a particular trait of neurons—unlike many other cells, neurons don't keep their own reserves of sugar and oxygen to produce energy. Instead, when neurons need energy, blood is directed to those neurons, and this blood flow can be detected by fMRI scanners.

The BOLD technique has generated many very interesting results—reconstructing movies [3] from the resulting neural activity, mapping how language is stored in the brain [4], and detecting the experience of chronic pain [5]. Many of these scientific studies result in sensational headlines about scientists "reading minds", but there are significant limitations to this technology [6].

Even if the resolution of fMRI scanners could be improved, BOLD imaging is only measuring blood flow as a proxy for neural activation, not the neural activity itself. Many other scanning technologies are also used for studying brain activity [7], but they all have practical limitations as well.

It may be that we eventually uncover all the secrets of how the brain works. It would be beneficial in many ways for us to discover the precise relationship between our mental realities and physical reality. Or, it could be that some aspects of this relationship will remain forever mysterious to us. But how is it possible to believe that our subjective experiences—our qualia—are produced by our physical brains, if we can't figure out how one maps to the other?

Although making analogies on a complex topic like this carries the risk of only adding to the confusion, computer technology offers us a rich source of metaphors that may help us think about these issues. Computers, like our brains, store and process information using physical hardware. Unlike our brains, we know a lot about how computers work.

Conversion

The first and simplest type of data transformation is conversion. When we convert something, it just means we go from one way of representing the information to another. A familiar system of conversion for most people is units of measure: 12 inches is the same thing as 1 foot. Half a gallon is the same thing as two quarts. Two cups is the same amount as 16 fluid ounces. When converting between different units, the same amount of stuff can be represented in a different way. In the following table, each row is a representation of the same quantity:

fl. oz   cups   pints   quarts   gallons
     1    1/8    1/16     1/32     1/128
     8      1     1/2      1/4      1/16
    16      2       1      1/2       1/8
    32      4       2        1       1/4
   128     16       8        4         1
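
If you'd like to check the table yourself, a few lines of Python will reproduce it, using the standard US conversion factors (8 fluid ounces to the cup, 16 to the pint, 32 to the quart, and 128 to the gallon):

# Reproduce each row of the table above: the same volume expressed in
# progressively larger units, shown as exact fractions.
from fractions import Fraction

for fl_oz in (1, 8, 16, 32, 128):
    print(fl_oz, Fraction(fl_oz, 8), Fraction(fl_oz, 16),
          Fraction(fl_oz, 32), Fraction(fl_oz, 128))

Each line prints the same quantity five different ways; nothing about the amount of liquid changes, only the representation.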

Another example, more relevant to computer science, is that numbers can be represented in a variety of numeral systems. The most familiar numeral system to most readers is the decimal system, which represents numbers using ten numerals (0–9). When counting in this system, we start with 1 and increment through 2, 3, 4, and so on, all the way up to 9. When we add one more to 9, we change the 9 to a 0 and write a 1 in the next column: 10. Because this is the point where we stop adding new symbols and start reusing the ones we have, we call the system decimal (from the Latin word for ten) or "base 10".

Many other numeral systems exist; common ones include binary (base 2), octal (base 8), and hexadecimal (base 16). The binary numeral system uses only the numerals 0 and 1. Octal uses only 0–7, and hexadecimal uses 0–F, where the letters A through F represent the numbers 10 to 15. No matter which numeral system is being used, the same number is being represented, just in a different way.

Converting between numeral systems does not change the value of the number, only its representation. For example, in binary, the number 2 is represented as 10, because that numeral system only uses the numerals 1 and 0. But 10 in binary still represents the number we call two. In the following table, each row contains the same number, represented in four different numeral systems:

decimal (base 10)   binary (base 2)   octal (base 8)   hexadecimal (base 16)
                1                 1                1                       1
                2                10                2                       2
                3                11                3                       3
                4               100                4                       4
                5               101                5                       5
                6               110                6                       6
                7               111                7                       7
                8              1000               10                       8
                9              1001               11                       9
               10              1010               12                       A
               11              1011               13                       B
               12              1100               14                       C
               15              1111               17                       F
               16             10000               20                      10
               20             10100               24                      14
               26             11010               32                      1A
               32            100000               40                      20
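
In code, this kind of conversion is a one-liner in either direction. Here's a small Python sketch that reproduces the row for 26 and then converts each representation back to the same value:

# One number, four representations: format() renders it in binary, octal,
# and hexadecimal; int() converts each string back, given its base.
n = 26
print(format(n, 'b'), format(n, 'o'), format(n, 'X'))   # 11010 32 1A
print(int('11010', 2), int('32', 8), int('1A', 16))     # 26 26 26

No information is gained or lost in either direction; that's what makes it a pure conversion.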

So, how can we use conversion as a metaphor for talking about minds and brains? Well, if something like my imaginary system of ideons and memocules were actually true, it would mean that the difference between mental and physical objects would only require a conversion from one representation to another. Based on what we've discovered about the brain so far—neural networks and such—it seems that something much more complicated is going on. Conversion seems like it's probably not a great metaphor for the relationship between the physical and mental worlds.

Encoding

Because computer processors operate by switching semiconductors between two discrete states, they use binary (base 2) numbers for everything they do. However, there's no objective way to convert the letters and symbols of everyday written language to numbers. There is no natural numerical equivalent to the letter A, because A is an arbitrary symbol that only has meaning to humans.

Computers can't understand letters, only numbers. In order to store the letter A as a number so the computer can use it, it's necessary to have an index the computer can use to look up and store non-numeric symbols as numbers. The process of converting from one type of information to another using an arbitrary scheme like this is called encoding.

The secret codes that school children develop to write notes to each other are a kind of encoding. Kids develop an index that maps each letter to another, allowing A to be encoded as N, B encoded as Q and so on. If the teacher confiscates their note, it will be difficult for them to decode the meaning without that mapping. The process for computers is the same thing, except the letters are encoded as numbers instead of other letters. A mapping of letters and symbols to numbers for computer use is called a character encoding.
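
Before moving on to how computers do it, here's the schoolyard version in a few lines of Python. The note text and the particular letter mapping are my own invention; one fixed mapping encodes, and the reverse of that same mapping decodes:

# A schoolyard-style substitution cipher: every letter is swapped for
# another according to a fixed index, and the reversed index decodes it.
import string

plain  = string.ascii_uppercase
secret = plain[13:] + plain[:13]          # one arbitrary choice of mapping
encode = str.maketrans(plain, secret)
decode = str.maketrans(secret, plain)

note  = "MEET AT NOON"
coded = note.translate(encode)            # 'ZRRG NG ABBA'
print(coded, "->", coded.translate(decode))

Without the mapping, the teacher is stuck staring at ZRRG NG ABBA.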

One of the earliest and most widely used computer character encoding systems is ASCII, which developed out of telegraph codes and became the basis for many subsequent character encodings. In ASCII, the capital letter A is represented by the bit string 01000001, which is equivalent to the number 41 in base 16 and the number 65 in base 10. A lowercase letter a is represented by 01100001, which is the number 61 in base 16 and the number 97 in base 10.
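
You can confirm those values at any Python prompt: ord() looks up a character's code, chr() goes the other way, and format() shows the bit pattern.

# The ASCII values for 'A' and 'a', shown in decimal, binary, and hexadecimal.
print(ord('A'), format(ord('A'), '08b'), format(ord('A'), 'x'))   # 65 01000001 41
print(ord('a'), format(ord('a'), '08b'), format(ord('a'), 'x'))   # 97 01100001 61
print(chr(65), chr(97))                                           # A a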

Complete ASCII tables are easy to find online [8]. Character encodings like ASCII are built into computer software so that human-readable text can be stored and retrieved as binary numbers. Standard ASCII covers only 128 characters, so accented letters like ñ come from extended encodings built on top of it: in the Latin-1 (ISO-8859-1) encoding, for example, when a computer reading a file sees the pattern 11110001, it knows to render the character ñ, and if you type an ñ on your keyboard, the computer records it in memory as 11110001.

The mappings of letters to numbers in ASCII or any other character encoding are completely arbitrary; some number had to be picked for each letter and symbol, and these are what was chosen. Encoding and decoding are simply the processes of substituting one representation with another by looking them up in an index.

Does encoding provide us with a better metaphor for the relationship between our brain and our mind? The process that our brain uses to store and use information is often casually referred to as encoding. Although most people who use the word probably only mean that mental information is somehow represented in the brain, there are serious implications if we explore this analogy further.

If the information in our brains is encoded, then it could be decoded. If only we knew the code! If it turns out, for example, that our brains' neural networks have a very standardized design—a common mapping between physical structures and mental objects—it would open the door to mind-reading technologies once we find that code.

If the encoding between our brains and our minds could be figured out, then it might be possible to change opinions, alter memories, copy entire personalities, and reprogram consciousness in unprecedented ways using technology. All that would be required would be to change the brain in ways that we knew would change the corresponding mental information. Conversely, we could learn how to rewire our physical brains using mental techniques.

While it is true that changes to the brain produce changes in the mind, and that certain mental choices can create changes in the brain over time, these don't seem to be equivalent processes. Certainly there are no indications that there's a simple mapping between physical and mental processes. To find a better metaphor for the relationship between minds and brains, we'll have to keep looking.

One-Way Functions

Another data transformation metaphor might be one-way functions. These are mathematical processes that can't be reversed—they transform data from one form into something entirely different. This seems similar to the way our brain somehow produces our mind, but the process doesn't seem reversible, so let's consider this metaphor in more detail.

How do one-way functions work? Math obviously has many operations that are exact opposites of each other: addition and subtraction, multiplication and division, and so on. However, it turns out to be easy to find mathematical operations that are not so easy to undo. For example, if I secretly add two numbers together and show you only the result, you'll have a very hard time figuring out which two numbers I started with!

For a more elaborate example, say I took the phrase Hello World! and encoded the letters using ASCII. Then I told a computer to treat each resulting byte as an integer, add them all up, and return the decimal equivalent of the sum.

Hello World! encoded to binary using ASCII encoding looks like this:

01001000 01100101 01101100 01101100 01101111 00100000  Hello
01010111 01101111 01110010 01101100 01100100 00100001  World!

The first block of eight 1s and 0s, 01001000, is the ASCII code for a capital H. But it can also be interpreted as a number: converted from binary to decimal, it's 72. The last byte, 00100001, represents the exclamation mark in ASCII, but can also be converted to decimal as the number 33!

Because the computer can only store and process binary sequences, the codes for our English letters are really just numbers! They can be decoded back into English letters using the ASCII code, but they can also be treated just like numbers. If I simply add up all the numbers that comprise Hello World!, they add up to 1,085.
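
The whole computation fits in one line of Python, since a bytes object is already just a sequence of small integers:

# Add up the ASCII byte values of the phrase.
print(sum(b'Hello World!'))   # 1085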

Now suppose I gave you the number 1,085 and asked you to tell me how I got it.

Don't waste your time trying—there is actually no possible way to take only the number 1,085 and convert it back into Hello World!. The ASCII code table doesn't have an entry for the number 1,085. The process of treating all those bits as numbers and adding them up doesn't preserve the individual numbers that could be decoded into letters. The final sum contains no information that corresponds to the original text [9].

The reason this process can't be reversed if all you know is the final number, 1,085, is that there are many possible phrases that might have produced that number through a similar process. Your only hope of reconstructing the original phrase—assuming you knew the process I used—would be to try every possible combination of numbers that could add up to 1,085 and see which ones decoded to meaningful English phrases. Let me know how that goes.
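
To see how hopeless this is, note that any rearrangement of those same twelve bytes, and plenty of entirely different byte strings, produces exactly the same total:

# Different inputs, identical sums: the total alone can't tell them apart.
print(sum(b'Hello World!'))   # 1085
print(sum(b'World! Hello'))   # 1085 -- same bytes, different order
print(sum(b'Hfllo World '))   # 1085 -- a different string with the same sum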

Computers use one-way functions for various purposes, like validating the integrity of data or confirming whether two passwords match—purposes where reconstructing the original data isn't important, but comparing it quickly is. For example, if I sent you the phrase Hello World! and you responded that the bits in it added up to 1,085, I would have confidence that you received the same message I sent. Using one-way functions to check that data has been transmitted intact is very common in networking, storage, and many other computer applications [10].
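
In practice, real systems use much stronger one-way functions than a simple sum: cryptographic hashes like SHA-256, available in Python's standard library, where changing even one character of the input produces a completely different digest.

# A cryptographic hash: easy to compute, practically impossible to reverse.
import hashlib

print(hashlib.sha256(b'Hello World!').hexdigest())
print(hashlib.sha256(b'Hello World?').hexdigest())   # one character changed

If I send you a file and its digest, you can recompute the digest on your end and know immediately whether anything was altered in transit.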

How would one-way functions work as an analogy for the way brains create minds? Well, it could be that when the brain takes in sensory input, it transforms that input into a unique language for its internal use, one that can't be decoded. It may be that once the brain translates data into its own custom format, it discards anything it doesn't need. Many audio and video file formats use "perceptual" compression like this to save space by discarding data that our brains don't need in order to enjoy a movie or song.

Furthermore, at least as far as we know, the brain doesn't ever need to reproduce the exact stimuli that went into it. It never has to project a scene that the eyes took in, or reproduce a smell that the nose detected. Even repeating words or sounds we heard doesn't imply the brain is reversing previously received signals, because our brain uses two different systems for understanding and creating language [11].

Unfortunately, one-way functions would be a disappointing analogy for brain operation in some ways, because it would mean that pinning down how the brain transforms information it takes in might be very difficult, if not impossible. Once information went into a brain, there would be no way to reverse the process and figure out what the original information was.

Encryption

One final metaphor that might be useful to consider is encryption. Like a one-way function, encryption performs a mathematical operation on data, altering it into something unrecognizable. Unlike one-way functions, all of the information of the original input is preserved in the new form, and the process is reversible—if you know the secret.

In cryptography, the secret (or "key") is an additional piece of data used by the encryption algorithm to transform the data in a predictable way. In general, the encryption process itself is not kept secret, only the key.

[Figure: Basic concept of encryption]

For example, let's say I want to encrypt the ASCII capital letter H. The ASCII code for H in decimal notation is 72. For my key, I'll pick another number, 77. My encryption algorithm will be simple multiplication: 72 × 77 = 5,544. Given the ciphertext 5,544, how easy is it to reverse-engineer my original data?

If you know that the key is 77, it's trivial to decrypt 5,544 back to 72 by using division: 5,544 ÷ 77 = 72. If you don't know the key, but you do know the method I used (multiplication), then all you need to do is look at all the numbers that can be multiplied together to make 5,544—its factors—and figure out which one it is: a × b = 5,544. Easy, right?

Well... it turns out there are a lot of ways to multiply two numbers to get 5,544. The number 5,544 has 48 factors, numbers it can be evenly divided by:

1, 2, 3, 4, 6, 7, 8, 9, 11, 12, 14, 18, 21, 22, 24, 28, 33, 36, 42, 44, 56, 63, 66, 72, 77, 84, 88, 99, 126, 132, 154, 168, 198, 231, 252, 264, 308, 396, 462, 504, 616, 693, 792, 924, 1386, 1848, 2772, 5544
The factors of 5,544

Many of those factors also correspond to ASCII characters. If you suspected that my original plaintext was an ASCII character, you could look only at the numbers that correspond to ASCII characters. Unfortunately, it would be easy to guess wrong: you might guess that the key was 72 and the original data 77, giving you the ASCII code for the capital letter M!
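
Here's the whole toy scheme in a few lines of Python, including what happens when a would-be codebreaker guesses the wrong factor:

# The toy cipher end to end: multiply to encrypt, divide to decrypt, and
# see how a plausible but wrong key guess decrypts to the wrong letter.
plaintext  = ord('H')                 # 72
key        = 77
ciphertext = plaintext * key          # 5544
print(chr(ciphertext // key))         # 'H'  (right key)
print(chr(ciphertext // 72))          # 'M'  (wrong guess: 5544 / 72 = 77)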

Of course, real encryption schemes are much more complex than this. For example, I used an open-source encryption program [12] called gpg to encrypt a two-byte file consisting of just two ASCII characters: H and a line feed (hex 0a). I used AES256, currently the gold standard of symmetric encryption, and this is the resulting 80-byte ciphertext (in hexadecimal):

8c0d 0409 0302 6d38 0fbc 4503 35ee ffd2 
3f01 0a64 5e0d fa35 c358 44ec fe22 d96e
538a 7906 7406 c90b 8c78 fa33 4355 d03f
362c e197 289e 4f9b 1bce 43a9 a435 d474
d226 816c d861 a8eb 2908 db1b 110c 5599

Currently, AES256 is considered unbreakable—there's no known way to recover the key or the original plaintext from the ciphertext alone. And even though I've handed you both the plaintext and the ciphertext, AES is designed to resist exactly that kind of known-plaintext attack; recovering the key is an exercise I leave to the reader.
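
For readers who want to play with symmetric encryption without installing gpg, here's a minimal sketch using the third-party Python cryptography package. Its Fernet recipe uses 128-bit AES rather than AES256, but the shape of the process is the same: one shared key both locks and unlocks the data.

# Symmetric encryption round trip: the same key encrypts and decrypts.
from cryptography.fernet import Fernet

key   = Fernet.generate_key()    # the shared secret
box   = Fernet(key)
token = box.encrypt(b'H\n')      # gibberish without the key
print(box.decrypt(token))        # b'H\n'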

All of the examples above use what's called symmetric encryption, meaning that the same key is used to encrypt and decrypt the message. However, in the 1970s a new type of cryptography was developed called asymmetric encryption, also known as "public-key" encryption. In asymmetric encryption, a pair of related keys is generated. One key—the public key—can be used to encrypt data that only the other key—the private key—can decrypt.

We use this type of encryption all the time on the Internet—you're using it right now to read this blog post, even if you don't know it! When you use HTTPS to connect to a website, the server sends your computer a certificate that includes a public key. Your computer uses this public key to encrypt messages that only the server can decrypt with its private key. The two computers then negotiate a unique symmetric key for the session, allowing traffic in both directions to be encrypted in transit and only decrypted by the two computers at either end.

Public/private key pairs can be used for many other purposes as well. For example, a private key can be used to create a digital signature that anyone can verify using the public key. The certificates provided by the server at the start of an HTTPS session are digitally signed by Certificate Authorities, trusted third-party organizations whose public keys your computer already has, so it can verify that the certificate is legitimate.
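
Using the same cryptography package, here's a rough sketch of both uses of a key pair: encrypting with the public key and decrypting with the private one, then signing with the private key and verifying with the public one. The key size and padding choices are just common defaults, not anything special.

# Asymmetric (public-key) encryption and digital signatures with RSA.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key  = private_key.public_key()

# Anyone with the public key can encrypt; only the private key can decrypt.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b'Hello World!', oaep)
print(private_key.decrypt(ciphertext, oaep))            # b'Hello World!'

# Only the private key can sign; anyone with the public key can verify.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(b'Hello World!', pss, hashes.SHA256())
public_key.verify(signature, b'Hello World!', pss, hashes.SHA256())  # raises if forged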

Does encryption offer a better metaphor for the mind/brain relationship? Well, if the brain works like an encryption/decryption system, there may eventually be ways to figure out the "keys" it uses.

Encryption offers a more interesting metaphor for our brains than one-way functions because, after all, some part of our brain has to be able to make sense of the information for it to be of any value; if our senses encrypt information as it goes into our brain, other parts of our brain have to be able to decrypt it. And if our brain can decrypt the contents of our minds, that implies we could someday use technology to decrypt them too, right?

Our Encrypted Minds

Ask a friend to pick a random word and tell it to you. You can repeat that word back to them, but even if you repeat it as fast as you can, there's an unavoidable delay while one system in your brain processes the incoming word and passes it to the system which tells your muscles how to repeat it. In between the hearing and the repetition, the word still exists somewhere, somehow, in your brain.

It turns out that two different parts of your brain process language on its way in and on its way out. Wernicke's Area encrypts your friend's word when you hear it, and other systems in your brain can decrypt it. One of those systems is Broca's Area, the part of our brain that produces language. Even if some systems can only encrypt and others can only decrypt, the point remains—where was the information about that word between the time you heard it and the time you repeated it?

Unlike the encryption standards used by computers, the brain's system of encryption is under no obligation to stay static. In fact, it could be constantly changing, modifying itself as we learn and experience the world. We have to learn to understand a language and how to speak it, just as we have to learn how to read and how to write. Just because you've seen a beautiful sunset doesn't mean you can reproduce it in watercolors, and just because you've heard a song doesn't mean you can hum the tune.

Have you ever heard a sound you couldn't identify? Seen an optical illusion or mirage you couldn't immediately make sense of? Read a sentence and had to go back because it didn't make any sense? These reflect your brain's various encryption systems trying to encrypt information from the outside world into your brain's own internal language—trying to assign meaning to raw sensory input, and failing for whatever reason.

Have you ever had an idea that you struggled to express? A song you could hear in your head, but when you try to play it, it just doesn't sound like it should? A picture you wanted to draw, but it never comes out right? A sentence you wanted to write, but the words just never fit together? These are your brain's decryption systems trying to decrypt something out of your brain's internal language, and failing for whatever reason.

Our brain systems developed only to talk to each other. Although it does seem possible to localize certain functions in certain areas, we are far from understanding all of the complex interconnections in our brains. If our brains are encrypted, the secret key probably won't be a long number. The key might be as complex as the entire neural network.

What would this mean for the distinction between the brain and the mind? For one thing, an encrypted mind may be very hard to decrypt. Our brains are far, far more complex than even the most sophisticated digital computer. Each brain's encryption may be unique to that individual. We may eventually learn how to build machines that encrypt digital data into our brains and decrypt our thoughts back out, or we may not.

Mental Prosthetics

Even if we aren't able to decrypt our brains, we might still be able to create many practical technological interfaces to the brain by taking advantage of one of the brain's best features—its own ability to modify itself, also known as neuroplasticity [13].

Researchers have been demonstrating remarkable successes with prosthetics in recent years, creating devices that can help people see and hear [14]. Prosthetic limbs are being made that can be controlled by and provide feedback directly to the brain [15]. Many of these successes are owed to the brain's remarkable adaptability; the brain rewires itself to adapt to the new signals as the person learns how to use the prosthetic.

Learning to use a prosthetic is probably a very similar process to the way new humans learn to use their senses and muscles for the first time. It takes a while, but eventually most of us get very good at it. If our brains work like the metaphors of encryption or one-way functions, this could mean that integrating technology into mature brains could be as difficult as an adult amputee learning to use a new prosthetic limb. But infants might learn to use prosthetic neural interfaces implanted at birth the same way they develop other skills. To a child who grew up with a mental prosthetic, being able to interface directly with computers might seem as normal as breathing.

Conclusion

If anything, the brain's operation is vastly more complex than these data transformation metaphors can capture. It may be a long time—or maybe never—before we can completely quantify how our brains work in an objective sense. And even if we knew it was possible, in principle, to decrypt a brain, would that eliminate the existence of subjective experience, or just change it?

Think about that word that you heard and repeated. Just by the act of recalling it, you're processing information that may be impossible to retrieve any other way. Like any other qualia you can bring to mind—a taste, a color, a memory, a feeling—you're experiencing something that's uniquely yours. Even if you have complete faith that the experience is created by electrochemical operations in your physical brain, the fact is that we know of no other way to decrypt that information. We may never have another way.

The good news is, we can spend as much of our lives as we want studying our own subjective existence. We don't need science to validate that our experiences are real in the objective sense. Whatever their origins, our experiences are real because we experience them [15].

Encryption might provide a useful analogy for explaining how our brains can contain the qualia of our subjective minds, while at the same time our minds are completely inaccessible to anyone but ourselves. For the time being—and maybe forever—our minds are our best and only tool for examining, understanding, and experiencing our subjective experience.


References

  1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2776484/
  2. https://pubmed.ncbi.nlm.nih.gov/12543266/
  3. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3326357/
  4. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4852309/
  5. https://www.nature.com/articles/nrneurol.2017.122
  6. A typical fMRI scanner has voxels about 3 mm on each side, while "high resolution" fMRI techniques can achieve sub-millimeter resolution in humans and down to around 50 micrometers (μm) in animals. This may seem small, but the average human brain is only about 140 mm wide, 167 mm long, and 93 mm high. The nucleus of a neuron is only 3–18 μm across, meaning the technique is not isolating individual neurons. Higher resolutions also come with additional challenges, such as a lower signal-to-noise ratio and increased sensitivity to the slightest movements of the head. See https://www.frontiersin.org/articles/10.3389/fncom.2016.00066/full for more information.
  7. See computed tomography (CT), positron emission tomography (PET), single-photon-emission computed tomography (SPECT), electroencephalography (EEG), and magnetoencephalography (MEG) for examples.
  8. https://www.ascii-code.com/
  9. The number 1,085 doesn't even correspond to an ASCII character, because ASCII doesn't have that many characters. (As a Unicode code point, 1,085 is U+043D, the Cyrillic letter н—but that doesn't help you at all.) The original information is gone, completely transformed into something else, with no way of changing it back.
  10. One-way functions include hashing, checksums, and fingerprinting, among others.
  11. https://en.wikipedia.org/wiki/Neurolinguistics
  12. https://gnupg.org/documentation/manpage.html
  13. https://en.wikipedia.org/wiki/Neuroplasticity
  14. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4329712/
    https://www.nidcd.nih.gov/health/cochlear-implants
  15. Or as Descartes memorably put it, cogito ergo sum—I think, therefore I am.


© 2020 Craig A. Butler
First Posted: 9th Jun 2020
Last Updated: 23rd Nov 2020