Tools and Weapons
Technological advancement is a powerful thing. It can also be a scary one. Consider NSA contractor Edward Snowden: after downloading 1.5 million classified NSA documents, he left his job in Hawaii and headed to Hong Kong, where he revealed the documents to journalists. The public discovered that the NSA and the British government had been copying user information from Yahoo and many other sources. One such program, PRISM, was an agreement under which various companies shared private user data with the NSA. The release of this information led citizens to question how their data was being used by major tech companies and, more importantly, whether major tech companies could be trusted. In Tools and Weapons by Brad Smith and Carol Ann Browne, significant events in tech are analyzed and lessons are shared, revealing that the relationship between user data and who can see it is a bit more complex than it may seem... especially when the government gets involved.
Customer approval seems ideal for determining whether or not to share their data, but there is a gray area.
Theoretically, customer data should not be shared without consent or due legal process. If a person doesn’t agree to share their data, no one should have access to it, right?
Unfortunately, it's pretty hard to get legal consent from someone you can’t find, especially when a life is on the line. For instance, when Wall Street Journal reporter Daniel Pearl was kidnapped by terrorists in Pakistan, the only way to save him was to find him. The kidnappers communicated through wifi hotspots across Pakistan. Pearl was killed before his captors were caught, but they were eventually found using web-based tracking. Was using data to track them okay, even though no one gave explicit consent?
In this instance, yes, because tracking could have saved a life and did bring the killers to justice. The point? Not all data tracking and sharing is wrong. Even Microsoft thinks so: this chase led the company to refine its thinking on customer privacy. It determined that when faced with an issue that could affect a user’s privacy, it would follow four principles: privacy, security, transparency, and compliance.
The government shouldn’t use its powers to gain access to citizens’ personal data, and tech companies need to be protected.
The government began pursuing information from tech companies while trying to track terrorists, a noble deed, but the practice devolved into forcing tech companies to hand over private information about American citizens and issuing “gag orders.” These gag orders prevented the companies from even disclosing that the government had demanded their customers’ data. Eventually, Microsoft sued the government over this breach of privacy, and the Department of Justice ultimately relented: Microsoft met with the department, and limits were set on gag orders. This was one of the first steps taken to monitor the use of people’s data.
Each country needs to be treated differently when it comes to access to its citizens’ data.
Microsoft also had to decide how its rules about data access should apply to each country it worked with. After hearing speeches from people like 75-year-old Hans-Jochen Scheidler, who was imprisoned in East Germany after government spies determined he was handing out literature criticizing the regime, it became apparent that different countries needed different rules. Microsoft determined that countries with troubling human rights records would not be given access to citizens’ information. Period. Those with questionable records could have access to citizens’ business data, but nothing else.
Tech companies must be free to adapt to cyber attacks.
When Patrick Ward went in for heart surgery, to his surprise, the doctor couldn’t operate. St. Bartholomew’s Hospital had been hacked: each computer displayed a message demanding $300, or the user would lose access to their information forever. That’s a pretty big deal, especially in a hospital. The incident illustrates the power of cyberweapons, which continuously advance. Tech companies need the capability to adapt to the threat of cyberweapons and cyberattacks.
Governments can use social media to sway voters, reminding us to be careful about trusting what we see on the internet.
Political activism doesn’t always start on the streets. Now, more than ever, it starts on the internet, especially on social media. Governments know this and can use it to stir up problems. During the 2016 election, Russian operatives from the IRA (Internet Research Agency) planted fake stories about Hillary Clinton, including accusations that she was a pedophile or in ill health. These rumors were shared only within certain circles, increasing polarization between those who supported her and those who did not. The operatives also managed to organize an anti-Trump protest and a pro-Trump protest in Houston, Texas, at the same time. Our own government and corporations are not the only threats to internet privacy: foreign governments are also interested in planting information and stirring up trouble.
AI is not necessarily something to fear, but it can reflect the biases of its engineers.
People are often concerned about AI. They worry that the development of artificial intelligence will lead to undefeatable machines with superior intelligence and their own agenda. In fact, the greatest threat AI currently poses is that it reflects the biases of its creators. An article published under the headline “There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks” highlighted an important point about software development: checks need to happen to prevent bias from leaking in. The software reflected the biases of those who created it, and facial recognition software showed a related bias, working less reliably on faces unlike those of its developers. The solution? Making sure a more diverse group of engineers is present to check the software. Engineers who have faced any form of oppression are especially mindful of these blunders and can help eliminate them.
Technology isn’t all bad.
It's important to remember that technology is also often used for good. It can be used to decode ancient texts: Marina Rustow at Princeton uses technology to help her decode 4,000 documents from Cairo’s Ben Ezra Synagogue, quite a formidable task, but a possible one with technology’s help. There’s also an algorithm that predicts poaching behavior and helps park rangers in Uganda ward off poachers, protecting animals. These examples show that technological advances can have many positive uses, even for our furry friends.
The responsibility for the safe use of technology should not rest fully on tech companies or governmental bodies. Both are responsible.
A successful relationship between technology, government, and people is an incredibly complex one without any simplistic solutions. However, tech companies need to understand that they cannot be indifferent to their clients and how their technology is used. They need to consider how their products are being used, beyond simply selling them. Responsibility also lies with the government: it needs to understand and regulate the technological products being created and sold, while not issuing overreaching orders demanding information from tech companies.
So, while some tech advances are exciting (like technology that wards off poachers), other aspects of tech can be scary (like foreign governments planting stories on social media), and some can be downright terrifying (like the government forcing companies to hand over our data), the most important way we can face these issues is with information and knowledge. Like the AI that becomes less scary once we know the primary issue lies with its creators rather than the technology, understanding is essential. One key to understanding? Information. It's a powerful thing. And how do you get that? Why... by reading summaries like this one!
About the Author
Brad Smith is Microsoft’s president. There he has helped the company work through critical issues at the intersection of technology and society, including cybersecurity, privacy, AI, human rights, immigration, and environmental sustainability. He has been named one of the technology industry’s most respected figures, and The New York Times called him “a de facto ambassador for the technology industry at large."
Technological advancement is a powerful thing. It can also be a scary one. In Tools and Weapons by Brad Smith and Carol Ann Browne, significant events in tech are analyzed and lessons are shared, revealing that the relationship between user data and who can see it is a bit more complex than it may seem... especially when the government gets involved. This book covers: why customer approval is not always the deciding factor in whether their data will be accessed, the events that illustrate why the government shouldn’t use its powers to gain access to citizens’ personal data, and how tech companies need to be protected. Discover why each country needs to be treated differently when it comes to access to its citizens’ data. Learn how AI is not necessarily something to fear, but can reflect the biases of its engineers. This book also addresses a difficult question: who is responsible for protecting individuals’ private data after tech companies gather it, the government or the company that collects the information? (Hint: it's both.)