History of AI
AI. When you think of it, you imagine looking in the mirror and smiling at yourself, except this hologram talks in tech jargon and plans thousands of moves further ahead than any human could ever hope to. Essentially, the ultimate mission of AI, or artificial intelligence, is to create machines that can think and behave like people. Except while people are chained by physical and mental limitations, AI is limitless.
Believe it or not, the idea of artificial intelligence has always been etched in the human subconscious. In ye olden times, myths circulated among circles of philosophers about inventions that would change what it means to be human. "What separates us from beasts?" is a question that has haunted our psyche since our earliest days. In the 1700s, as the onset of the Industrial Revolution accelerated, the idea of machines having human capabilities became prominent among circles of wealthy European thinkers.
Fast forward past World War II, in which the culmination of the 20th century's most innovative technological advancements wreaked havoc at the cost of some 85 million lives, and a generation of intellectuals became conscious of how powerful machines could be. Alan Turing was a young British polymath who brought the idea of AI to the mainstream public for the first time. He asked why machines couldn't make decisions independently, based on logic and reason, like human beings. His 1950 paper, "Computing Machinery and Intelligence," summarizes his proposition, and many AI experts recognize it as the formal document that set the concept in stone.
Despite Turing's genius, computers first needed to develop past their Stone Age capabilities before an AI revolution could begin. In the 1940s, computers couldn't store commands; they could only execute them. Not only were they uber slow, but leasing one cost around $200,000 per month! So advocates needed solid backing from a well-endowed institution, as well as a solid proof of concept, to bring their ideas to fruition.
In 1955, the proof of concept blossomed in what is known as the world's first AI program, Logic Theorist. Invented by Herbert Simon, Cliff Shaw, and Allen Newell, it could imitate the reasoning skills of a person by proving mathematical theorems. It was presented at the famous Dartmouth Summer Research Project on Artificial Intelligence in 1956. Even though the conference struggled with organization, its impact cannot be overstated: it set the tone for AI research over the next two decades.
You may be thinking to yourself, "But that's so far away in the future. How is this possibly affecting me right now?" Whether you realize it or not, AI surrounds us. Remember that app called Grammarly that you use to check your spelling whenever an essay is close to being due? That app utilizes natural language processing (NLP), a form of AI used to analyze and interpret language. Because grammar has so many inherent patterns, the computer can quickly learn from the training data samples provided and tweak itself to improve the next time it encounters run-on sentences.
But wait. What the heck is training data? Basically it’s a sample of data that’s used to help AI to learn and respond to different situations. So using the example above, a programmer would train their algorithm using various cases where run-on sentences occur in an essay. Then, when the technology is actually launched into the world, it can tackle those pesky run-on sentences with ease!
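To make the idea concrete, here is a deliberately tiny sketch (not Grammarly's actual method, which is far more sophisticated) of "training" on labeled examples: the program learns a simple word-count threshold from sample sentences and uses it on new ones.

```python
# Toy illustration of training data: learn a word-count threshold that
# separates labeled run-on sentences from clean ones.

def train_threshold(examples):
    """examples: list of (sentence, is_run_on) pairs, i.e. the training data."""
    run_on_lengths = [len(s.split()) for s, label in examples if label]
    clean_lengths = [len(s.split()) for s, label in examples if not label]
    # Put the decision boundary halfway between the two class averages.
    avg_run_on = sum(run_on_lengths) / len(run_on_lengths)
    avg_clean = sum(clean_lengths) / len(clean_lengths)
    return (avg_run_on + avg_clean) / 2

def looks_like_run_on(sentence, threshold):
    return len(sentence.split()) > threshold

training_data = [
    ("The dog barked.", False),
    ("I like tea.", False),
    ("I went to the store and I bought milk and then I saw my friend and we talked for hours.", True),
    ("She was tired she went to bed early she forgot to set an alarm.", True),
]

threshold = train_threshold(training_data)
print(looks_like_run_on("The cat slept.", threshold))  # prints False
```

A real system would learn thousands of richer features instead of one crude length cutoff, but the loop is the same: feed in labeled cases, adjust the model, then let it judge sentences it has never seen.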
At this point, you're probably shaking your head and asking yourself, "Why care about correcting a dumb sentence?" But wait! Whenever Discord users integrate bots into group servers, artificial intelligence is being plugged in each and every time. See, a Discord bot is an AI that can perform any number of useful automated tasks that enhance the user experience, including moderating content, welcoming new members, and banning rule breakers. If some tunes are needed to pump up everyone's adrenaline while you're all playing League of Legends, bots can do that too. They help admins manage the server and users navigate it. There's this really cool one called Dank Memer which can be used to roast any random person. But anyway.
I'm sure we can all relate to looking at cute memes during a time of great distress. So what happens when you come across an adorable picture of a Cavalier King Charles Spaniel and are scratching your head while asking yourself, "What breed is that doge?" This is a real problem, is it not?
Not to worry, computer vision is here to help! It's a subset of deep learning, in which the computer uses an extensive hierarchy of layers to learn complex features by itself instead of being given specific instructions by programmers. Deep learning is what makes AI truly limitless, as the algorithm can learn naturally, much like a human. Within deep learning, computer vision processes an image through specialized algorithms that read its features and determine what those features represent in order to form a bigger picture of what the image is. So after Google Photos scans the picture above, it can determine that the dog's breed is a Cavalier King Charles Spaniel.
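The "features" idea can be sketched in miniature. Real computer vision learns its filters from data through convolutional layers; the hand-written vertical-edge detector below is only a toy stand-in showing how a low-level feature (an edge) feeds a higher-level conclusion (where the bright region starts).

```python
# Toy sketch of feature extraction: scan a tiny black-and-white "image"
# for vertical edges, then turn those features into a bigger-picture answer.

IMAGE = [  # 0 = dark pixel, 1 = bright pixel; bright region on the right
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

def vertical_edge_strength(image, col):
    """Sum of brightness jumps between column col and column col + 1."""
    return sum(abs(row[col + 1] - row[col]) for row in image)

# A low-level "layer" reads raw pixels and produces edge features...
edges = [vertical_edge_strength(IMAGE, c) for c in range(len(IMAGE[0]) - 1)]
print(edges)  # prints [0, 3, 0]

# ...and a higher "layer" interprets the features.
edge_location = edges.index(max(edges))
print(f"Strongest edge between columns {edge_location} and {edge_location + 1}")
```

Stack many learned layers of this kind (edges, then textures, then ears and snouts) and you get the hierarchy that lets a model name a dog breed from pixels.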
Okay, so we've learned about AI. But what about machine learning, that term which seems to hold basically the same meaning? Good question. Artificial intelligence is the more general category of devices acting smart, whether maneuvering a self-driving car or automatically trading stock shares. It encompasses machine learning, the more specific field in which devices learn to make decisions from data by themselves. ML has skyrocketed due to two factors: the realization that it is easier to let computers learn tasks than to program every rule by hand, and the ease with which data became available on the Internet. Machine intelligence is in a lot of the systems we use today, and its public fame traces back to the iconic chess match in which a computer beat the best chess player in the world.
So You Wanna Play a Game?
Let me tell you the story. Garry Kasparov was just your average dude who loved chess. He also happened to be the best player in the world. At the time, IBM was testing the capabilities of its supercomputer, Deep Blue, and wanted to know if it could defeat a human at chess. So in 1997 they sent Deep Blue off to battle against Garry. The resulting match was watched by programmers, chess players, scientists, and a whole bunch of excited citizens who impatiently awaited the outcome with hope and skepticism in their hearts. Kasparov did his best to formulate a grandiose strategy against the computer. However, Deep Blue's unemotional efficiency proved to be an advantage. After a long, arduous match, Deep Blue came out on top. The epic battle kicked off a new era: the age of man vs. machine had begun!
Examining the past relationship between humans and machines, it is clear that things are different now from how they used to be. In earlier eras, tools such as horse-drawn carts pulled heavy loads so that humans didn't have to. Life was brutal in a time of little medicine, food security, or convenience; people made do with what they had. Then the Industrial Revolution thrust humans into a new era where the machine no longer worked in tandem with the worker: the worker now competed against it. Thanks to more powerful capabilities and rapid efficiency, factories started replacing traditional cottage industries. The ones who felt the effect most were those at the bottom, who simply didn't have the resources to compete with companies that could produce greater output at a cheaper price. Continuing into the modern era, the phenomenon of technology disrupting the established order of society manifests in near-monopolies such as Google and Amazon, which dominate the web and delivery spaces.
New questions are being raised every day about the effect technology will have on our lives going forward. In the meantime, computers have quickly mastered challenging games through exciting new advances. Case in point: Google DeepMind's AlphaGo program beat reigning world champion Lee Sedol at the ancient board game Go back in 2016. Some may cheer in awe of the sheer potential that awaits us. However, AI is not all fun and games. The powerful technology poses real risks, despite providing tangible benefits to humanity. Visions of software-enhanced holographic armour may pop into your head like a costume straight out of a video game, but I assure you these dangers are present in our world today. Among them is the hacking of cybersecurity networks. AI can execute the voice and visual recognition features that serve as the keystone of many privacy processes today. Because AI-driven software models adapt so quickly, they are a threat to the identity information of billions of people around the globe. Hackers can even code programs that allow data breaches to remain undetected, leaving a devastating impact on the people whose data is stolen.
In July 2020, Twitter proved to be one of the largest corporations affected by a data breach when scam-filled messages were sent from influential accounts straight to individual users. The company has also faced questions over its respect for data privacy. Numerous individuals who have been banned from the service have claimed its algorithms are biased against viewpoints the company opposes. Whether or not those bans were justified, Twitter is far from the only company whose practices raise concerns: Big Tech firms such as Google, Amazon, and Apple all collect data on you every day.
There's a reason why tech companies provide you all of their services for free. Every day, when you post a memory on Facebook, order a package on Amazon, or search the web on Google, your data is being collected quietly. That data is combined to build a profile of you based on the preferences it reveals. Advertisers then pay the platforms for access to what those profiles say about you. So essentially, those Amazon ads for cosplay that you see when browsing the web aren't a creepy coincidence. They're there because you're being tracked.
Want proof? Since 2010, Facebook's advertising revenue has grown 3,600%, even though its user base increased by only 310%. In 2019, Facebook made an astounding $27 in profit per user, and Google did even better, earning $67 per user that year. The statistics are undeniable. They communicate the state of affairs concerning privacy in our world right now.
If you're still wondering why this is a problem, it's most likely because you haven't been exposed to an actual surveillance state. Simply put, we live in a mostly free, westernized society, so surveillance states feel like myths that dance around in dystopian films. Unfortunately, these systems are creeping into the United States, and very rapidly, as our government buys up private-sector databases in order to intercept communications.
There are two main threats presented by surveillance technology, as explained by legal scholar Neil M. Richards in the Harvard Law Review. First, if you were watched all the time by a central authority figure, think your father, teacher, or principal, would you dare speak a contrary opinion? Not likely. To develop a truly honest, genuine view of the world, we as humans require time alone to think and reflect on our experiences. Second, surveillance creates a highly unequal power dynamic between the watcher and the watched. People who are surveilled are at risk of discrimination, coercion, and selective enforcement, all toxic actions that an abuser inflicts upon a victim. In other countries, people are often prosecuted or blackmailed simply for expressing an opinion that differs from the status quo.
While we are not at such an extreme level, the centralized path that we are heading down inevitably will give one group a large amount of control over the majority of people. Now I’m going to be honest. I don’t think the way we have set up the exchange of information is going to guarantee personal security in the future.
Bridging the Gap
The three main issues I see with our current trust in enormous corporations are greed, bias, and control. Let me explain. The heads of large corporations are not incentivized to look out for the best interests of the consumer beyond keeping the service running smoothly. They want to make money, which is the primary goal of a business. So they will not be stopping the practice of selling their customers' data, your data, anytime soon. It will be shipped off to unknown entities: perhaps foreign governments or maybe even other multinational tech firms. Nobody can say exactly what will happen with your private information. Perhaps it will be collected by an official in Kazakhstan or used by a company selling server hardware in Kansas. Either way, your data was sent to these entities without your permission. It's like saying, "I won't tell a stranger who I am," and then donating everything to Google.
Secondly, tech companies have a virtual monopoly over the algorithms that determine the content social media viewers see every day. Their centralized nature clashes directly with the natural way we as humans learn and develop opinions about issues that really matter. When you scroll through posts, you see content recommended to you based on your tastes. Unfortunately, the real world doesn't cater to you alone. There are a multitude of ideas out there, some of which may clash with your personal interests. They still exist, though, and may pose a very real benefit to your perception of the truth. In the words of investor Ray Dalio, "Because it is difficult to see oneself objectively, you need to rely on the input of others and the whole body of evidence." His principle applies squarely to the modern world. Algorithms subtly present biased source material, which has produced factions of people more divided in their opinions than ever before. Algorithms have indirectly caused the very real discord currently plaguing our society, and they will have to be improved if we want any chance of achieving peace IRL.
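The self-reinforcing feed can be sketched in a few lines. This is a hypothetical toy, not any platform's real ranking system: posts are scored purely by tag overlap with what the user already liked, so dissenting content never surfaces.

```python
# Toy content-based recommender: rank posts by overlap with liked tags.
# Real feeds use far richer signals, but the feedback loop is similar.

def recommend(posts, liked_tags, top_k=2):
    """posts: list of (title, set_of_tags). Return the top_k by overlap."""
    scored = [(len(tags & liked_tags), title) for title, tags in posts]
    scored.sort(reverse=True)  # highest overlap first
    return [title for score, title in scored[:top_k]]

posts = [
    ("Why policy X is great", {"policy_x", "politics"}),
    ("More praise for policy X", {"policy_x", "opinion"}),
    ("The case against policy X", {"policy_x_critique", "politics"}),
]
liked = {"policy_x", "opinion"}

print(recommend(posts, liked))  # the critique never makes the cut
```

Because the user's history only ever contains what the feed already showed, each round narrows the tag set further, which is the filter-bubble dynamic in miniature.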
Last but not least are the unparalleled lies that spread like wildfire under the influence of the aforementioned algorithms. It is very hard to verify what is real and what is not. Every day, billions of Internet users stumble upon content posted by someone else that is not guaranteed to reflect reality, and that content is recycled continuously. Ever wonder how conspiracy theories become so popular? The user sees one post, taps with interest, and falls down a rabbit hole of recommended posts about the same inaccurate subject. And while politeness limits the emotions you can express to other people face to face, the Internet carries a swarm of anger, humiliation, and resentment at any given moment. As a result, you end up blasted with more negative content than positive during your time online, feeling a greater amount of despair than you deserve after the screens are turned off.
Considering the immense responsibility that the gatekeepers of AI will carry, there must be a way to keep its power in the hands of everyone, not just a single entity. The consequences otherwise will be disastrous. Taking into account the past atrocities born of human nature and amplified by the power of technology, the choices we make going forward will either make or break our order as a species. Especially since, when presented with the choice, most people will protect themselves instead of sticking their necks out for the whole. Many know this quality as self-preservation.
Self-preservation is not good or bad standing by itself; it is simply an expression of our need for survival. But it becomes a problem when a central authority stands over everything. Who will be brave enough to speak up and hold that authority accountable? Chances are, almost nobody. See the threat yet? The central authority will be able to abuse its power using whatever means are available. It could steal away your job, your family, and even your right to walk freely, and get away with it. The possibility of this happening in the very near future, via a technology with massive capability over our collective existence, does not bode well for humankind.
So we need a system that prevents power from being centralized in the hands of a select few while still enabling the same functionality that users so enjoy. And thus decentralization is what we need to keep our dignity as individuals. Decentralization is the only way that we will be free to express our opinions as individuals while preserving the transparency, inclusion, and creativity that is so treasured within the Internet. Ultimately, we don’t know what this is going to look like in the future. It may be built upon a backbone as exotic as quantum machine learning or a blockchain infrastructure. No matter the case, one thing is clear. We need a group of determined, driven individuals who stand by their values and are willing to take action in order to shape a freer, more just society.
To push societal values that lean toward democratization in AI, we need rules that set clear boundaries for what is and is not allowed. A major obstacle to achieving our goal is that current laws and legislation are outdated and provide no real framework for regulation; the existing ones are either permissive or completely ineffective. Take the Electronic Communications Privacy Act. Passed in 1986, it extended wiretap protections to electronic communications, yet its stored-communications rules still permit the government to access many digital records, such as old emails, without a warrant. There's also the Children's Online Privacy Protection Act, passed in 1998 and effective in 2000, which regulates how websites collect personal information from children under 13. The fact that America's privacy laws mostly range from two to four decades old should be a cause of concern for any citizen. After all, with the exponential rate at which technology is developing, how can legislation keep up with the clear need for security? New laws will have to be passed, for sure.
Because humanity has entered such a new realm, we'll first have to look at what AI actually is in order to shape laws that make sense. Most AI projects today have a model and a large dataset used to train it, overseen by data scientists who monitor the model's progress and make the appropriate changes to improve its accuracy. A central group manages every step of the process. But in reality, value creation for providers does not necessarily correlate with value creation for consumers. In fact, a centralized structure can stagnate progress to the point where consumers receive less value, as you can see on the graph below.
That’s because when people are provided an environment where they can collaborate freely, the exchange of ideas flourishes. An open system works in direct contrast to top-down leadership. Instead, the testing and optimization of the algorithm will be federated across multiple participants. However, an open system also means that the ideas exchanged won’t necessarily vibe with the status quo. In the long run, a democratized environment may even pose a threat to the status quo.
Before we jump straight from centralized AI to decentralized AI, there are a few questions we'll have to ask ourselves first, mainly relating to privacy, influence, economics, and transparency.
- Privacy: can groups train a model using datasets without revealing data publicly?
- Influence: can groups meaningfully influence the behavior of a model?
- Economics: can people other than the data scientists be rewarded for contributing to the knowledge and capabilities of the model?
- Transparency: can the model’s behavior be accessible to all parties without centralization?
I'll leave it up to you to wonder about what the future will look like. But these core questions are very important because they'll shape how laws are passed in the realm of technology, and decentralization can be incorporated into every single one of them. Privacy favors homomorphic encryption, where computation is carried out directly on ciphertext and produces a result that is also ciphertext. The magic is that when the encrypted result is decrypted, it matches the result of performing the same computation on the original plaintext. A server can therefore combine encrypted numbers on your behalf without ever learning what those numbers are; only the key holder can decrypt the final answer. Bam! Your password can now be processed without Google ever looking at it. Homomorphic encryption could even allow participants to contribute anonymously to datasets, reducing the likelihood that the final AI algorithm will discriminate against minorities or people who don't speak English as their first language.
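A tiny demonstration of the homomorphic idea: textbook RSA happens to be multiplicatively homomorphic, meaning multiplying two ciphertexts gives the ciphertext of the product. This is only a toy (the key below is trivially breakable, and real schemes such as Paillier or modern fully homomorphic systems are far more involved), but it shows computation on data nobody can read.

```python
# Toy homomorphic computation using textbook RSA (insecure demo key!).
# Multiplying ciphertexts yields the encryption of the plaintext product.

n, e, d = 3233, 17, 2753  # classic tiny RSA parameters (p=61, q=53)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 12, 7
ca, cb = encrypt(a), encrypt(b)

# One party combines the *ciphertexts* without ever seeing a or b...
combined = (ca * cb) % n

# ...and only the key holder can decrypt the result of that computation.
print(decrypt(combined))  # prints 84, which is 12 * 7
```

The party doing the multiplication never held the key and never saw 12 or 7, yet the decrypted answer is exactly what computing on the plaintext would have given.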
Blockchain — my passion! It's the promising backbone of a decentralized, machine-driven economy. The way blockchain works is fundamentally aligned with the principles of a true decentralized AI application. In a nutshell, it is a database where blocks of data are chained together. When information is sent from one party to another, both parties typically have to reach a mutual consensus, or agreement on what is going on; otherwise, the situation becomes disorganized and data may be mismanaged. Blockchain tidies up a potential hassle through its system of smart contracts, computer protocols designed to execute once the terms of the buyer and seller have been met. Once information is sent out from the sender, it is verified by the smart contract and then added onto the ledger, the record of all previous transactions on that blockchain. Blockchains are typically associated with cryptocurrency, but lately they have been used in the emerging market of Dapps, or decentralized applications: a major breakthrough, since they run with no central operator. Because blockchain provides the infrastructure and security necessary to facilitate solid interactions across so many diverse purposes, it could be the Next Big Thing to drive a market breakthrough in decentralized AI.
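The chaining idea at the heart of all this fits in a few lines. The sketch below is only the skeleton (no consensus, mining, or smart contracts): each block stores the hash of the previous block, so tampering with any past entry breaks every link after it.

```python
import hashlib
import json

# Minimal blockchain sketch: each block commits to the previous block's hash.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def is_valid(chain):
    """Every block's prev_hash must match the actual hash of its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
add_block(ledger, "Alice pays Bob 5")
add_block(ledger, "Bob pays Carol 2")
print(is_valid(ledger))  # prints True

ledger[0]["data"] = "Alice pays Bob 500"  # tamper with history...
print(is_valid(ledger))  # prints False: the chain of hashes no longer matches
```

That tamper-evidence is why a shared ledger can be trusted without a central bookkeeper: rewriting history means rewriting every subsequent block on every participant's copy.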
Now, you may be thinking to yourself, "All of these sound super dope, but why should I care if there's the possibility that the AI's results are going to be biased anyway?" Good question, and federated learning helps here by widening who gets to contribute. Federated learning is a system that enables AI training to run across widely distributed platforms, such as mobile or IoT devices. It was introduced by Google researchers, the initial idea being that a shared model is trained, under the coordination of a central server, by a federation of participating devices. Each device contributes by training on its own data locally; only the resulting model updates, never the raw data, are sent to the central server. It is a solid alternative to simply running AI infrastructure from a centralized model.
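One federated round can be sketched with a one-parameter model, y = w * x. This is an illustration of the averaging idea, not Google's implementation: each client fits the slope on its own private points, and only that single number travels to the server.

```python
# Toy federated averaging: clients fit y = w * x locally; the server
# only ever sees the fitted slopes, never the raw (x, y) data points.

def local_fit(points):
    """Least-squares slope through the origin, computed on-device."""
    return sum(x * y for x, y in points) / sum(x * x for x, y in points)

# Three clients, each holding private data drawn from the rule y = 3x.
clients = [
    [(1, 3), (2, 6)],
    [(3, 9), (5, 15)],
    [(4, 12), (10, 30)],
]

updates = [local_fit(points) for points in clients]  # one number per device
global_w = sum(updates) / len(updates)               # the server just averages

print(global_w)  # prints 3.0
```

The server recovers the true slope without any client revealing a single data point, which is the core privacy bargain of federated learning; production systems iterate this round many times over millions of devices.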
Decentralization may not solve all problems with AI. In fact, it may lead to some more puzzles of its own. No matter the challenges that are bound to arise, it’s worth pursuing a system where everyone’s voice is valued. It’s time to lay the foundation for a system that guarantees greater equality — not only for us — but for the future generations who will inherit this world after we pass on.