Monday, April 19, 2010
This is a cross post from the Official Google Blog
Two and a half years ago, we outlined our approach to removing content from Google products and services. Our process hasn’t changed since then, but our recent decision to stop censoring search on Google.cn has raised new questions about when we remove content, and how we respond to censorship demands by governments. So we figured it was time for a refresher.
Censorship of the web is a growing problem. According to the Open Net Initiative, the number of governments that censor has grown from about four in 2002 to over 40 today. In fact, some governments are now blocking content before it even reaches their citizens. Even benign intentions can result in the spectre of real censorship. Repressive regimes are building firewalls and cracking down on dissent online -- dealing harshly with anyone who breaks the rules.
Increased government censorship of the web is undoubtedly driven by the fact that record numbers of people now have access to the Internet, and that they are creating more content than ever before. For example, over 24 hours of video are uploaded to YouTube every minute of every day. This creates big challenges for governments used to controlling traditional print and broadcast media. While everyone agrees that there are limits to what information should be available online -- for example child pornography -- many of the new government restrictions we are seeing today not only strike at the heart of an open Internet but also violate Article 19 of the Universal Declaration of Human Rights, which states that: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”
We see these attempts at control in many ways. China is the most polarising example, but it is not the only one. Google products -- from search and Blogger to YouTube and Google Docs -- have been blocked in 25 of the 100 countries where we offer our services. In addition, we regularly receive government requests to restrict or remove content from our properties. When we receive those requests, we examine them closely to ensure they comply with the law, and if we think they're overly broad, we attempt to narrow them down. Where possible, we are also transparent with our users about what content we have been required to block or remove so they understand that they may not be getting the full picture.
On our own services, we deal with controversial content in different ways, depending on the product. As a starting point, we distinguish between search (where we are simply linking to other web pages), the content we host, and ads. In a nutshell, here is our approach:
Search is the least restrictive of all our services, because search results are a reflection of the content of the web. We do not remove content from search globally except in narrow circumstances, like child pornography, certain links to copyrighted material, spam, malware, and results that contain sensitive personal information like credit card numbers. Specifically, we don’t want to engage in political censorship. This is especially true in countries like China and Vietnam that do not have democratic processes through which citizens can challenge censorship mandates. We carefully evaluate whether or not to establish a physical presence in countries where political censorship is likely to happen.
Some democratically-elected governments in Europe and elsewhere do have national laws that prohibit certain types of content. Our policy is to comply with the laws of these democratic governments -- for example, those that make pro-Nazi material illegal in Germany and France -- and remove search results from only our local search engine (for example, www.google.de in Germany). We also comply with youth protection laws in countries like Germany by removing links to certain material that is deemed inappropriate for children or by enabling Safe Search by default, as we do in Korea. Whenever we do remove content, we display a message for our users that X number of results have been removed to comply with local law and we also report those removals to chillingeffects.org, a project run by the Berkman Center for Internet and Society, which tracks online restrictions on speech.
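To illustrate the mechanics described above, here is a minimal sketch of locale-scoped removal. It is purely hypothetical -- the function, data structure and notice text are assumptions for illustration, not Google's actual systems -- but it shows the two properties that matter: removals apply only to the local domain, and users are told how many results are missing.

```python
# Hypothetical sketch of per-locale removal with a user-facing notice.
# Names and data are illustrative assumptions, not Google's code.

# Removal orders, keyed by the local domain they legally apply to.
LOCAL_REMOVALS = {
    "www.google.de": {"http://example.com/banned-in-de"},
}

def filter_results(results, domain):
    """Drop restricted URLs for this local domain only, and count the
    removals so the user can be told the picture is incomplete."""
    blocked = LOCAL_REMOVALS.get(domain, set())
    kept = [url for url in results if url not in blocked]
    return kept, len(results) - len(kept)

results = ["http://example.com/banned-in-de", "http://example.org/fine"]

# On www.google.de the restricted link is dropped and a notice is shown...
kept, removed = filter_results(results, "www.google.de")
if removed:
    print(f"In response to a legal requirement, {removed} result(s) "
          "have been removed from this page.")

# ...while the same query on www.google.com returns everything.
kept, removed = filter_results(results, "www.google.com")
assert removed == 0
```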
Platforms that host content like Blogger, YouTube, and Picasa Web Albums have content policies that outline what is, and is not, permissible on those sites. A good example of content we do not allow is hate speech. Our enforcement of these policies results in the removal of more content from our hosted content platforms than we remove from Google Search. Blogger, as a pure platform for expression, is among the most open of our services, allowing for example legal pornography, as long as it complies with the Blogger Content Policy. YouTube, as a community intended to permit sharing, comments, and other user-to-user interactions, has its Community Guidelines that define its own rules of the road. For example, pornography is absolutely not allowed on YouTube.
We try to make it as easy as possible for users to flag content that violates our policies. Here's a video explaining how flagging works on YouTube. We review flagged content across all our products 24 hours a day, seven days a week, and remove offending content from our sites. And if there are local laws where we do business that prohibit content that would otherwise be allowed, we restrict access to that content only in the country that prohibits it. For example, in Turkey, videos that insult the founder of modern Turkey, Mustafa Kemal Atatürk, are illegal. Two years ago, we were notified of such content on YouTube and blocked the videos that violated local law in Turkey. A Turkish court subsequently demanded that we block them globally, which we refused to do, arguing that Turkish law cannot apply outside Turkey. As a result, YouTube has been blocked there.
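The same principle -- restrict only where the law requires -- can be sketched as a simple per-country check. Again, this is a hypothetical illustration under assumed names and data, not YouTube's actual system.

```python
# Hypothetical sketch of country-scoped blocking. Identifiers and data
# are illustrative assumptions, not YouTube's implementation.

# Videos mapped to the countries where a valid local legal order applies.
BLOCKED_IN = {
    "video-123": {"TR"},  # restricted in Turkey under local law
}

def can_view(video_id: str, viewer_country: str) -> bool:
    """A video stays visible everywhere except in countries where a
    local legal restriction applies; there is no global takedown."""
    return viewer_country not in BLOCKED_IN.get(video_id, set())

assert can_view("video-123", "US")       # unaffected outside Turkey
assert not can_view("video-123", "TR")   # restricted only where prohibited
```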
Finally, our ads products have the most restrictive policies, because they are commercial products intended to generate revenue.
These policies are always evolving. Decisions to allow, restrict or remove content from our services and products often require difficult judgment calls. We have spirited debates about the right course of action, whether it’s about our own content policies or the extent to which we resist a government request. In the end, we rely on the principles that sit at the heart of everything we do.
We’ve said them before, but in these particularly challenging times, they bear repeating: We have a bias in favor of people's right to free expression. We are driven by a belief that more information means more choice, more freedom and ultimately more power for the individual.
Sunday, February 14, 2010
Our submission on mandatory ISP level filtering
There has been a lot of attention around the Australian Government's mandatory ISP-level filtering proposal. Google -- and many of you -- have argued that the proposal goes too far: the scope of the filter is too broad, and the regime takes the focus off more important areas such as online safety education and better support for policing efforts.
In December we expressed our concern with the Government's filtering proposal on this blog. Today we join the Australian Library and Information Association (ALIA) -- which represents 12 million library users around Australia -- Yahoo! and the Inspire Foundation in proposing some core principles for a Safer Internet. We also expand on our views in a submission to the Department of Broadband, Communications and the Digital Economy.
Here are the highlights from our submission:
It would block some important content. The scope of content to be filtered ("Refused Classification" or "RC") is very wide. The report Untangling The Net: The Scope of Content Caught By Mandatory Internet Filtering found that a wide range of content could be prohibited, including not just child pornography but also socially and politically controversial material. This raises genuine questions about restrictions on access to information, which is vital in a democracy.
It removes choices. The Government's proposal removes choices for parents as to what they and their children can access online. Moreover, a filter may give a false sense of security and create a climate of complacency that someone else is managing your (or your children's) online experience.
It isn't effective in protecting kids. A large proportion of child sexual abuse content is not found on public websites, but in chat rooms and on peer-to-peer networks. The proposed filtering regime will not effectively protect children from this material. Moreover, the filter cannot practically be applied to high-volume sites such as Wikipedia, YouTube, Facebook and Twitter, as the impact on Internet access speeds would be too great.
YouTube is a platform for free expression. We have clear policies about what is allowed and not allowed on the site. For example, we do not permit hate speech or sexually explicit material, and all videos uploaded must comply with our Community Guidelines. Like all law-abiding companies, YouTube complies with the laws in the countries in which we operate. When we receive a valid legal request, like a court order, to remove content alleged to violate local laws, we first check that the request complies with the law, and we will seek to narrow it if the request is overly broad. Beyond these clearly defined parameters, we will not remove material from YouTube.
Our view is that online safety should focus on user education, individual user empowerment through technology tools (such as SafeSearch Lock, Safety Mode on YouTube), and cooperation between law enforcement and industry partners. We're partnering with some tremendous organisations in Australia towards this goal.
Posted by Iarla Flynn, Head of Policy, Google Australia
Tuesday, December 15, 2009
Our views on Mandatory ISP Filtering
At Google we are concerned by the Government's plans to introduce a mandatory filtering regime for Internet Service Providers (ISPs) in Australia, the first of its kind amongst western democracies.* Our primary concern is that the scope of content to be filtered is too wide.
We have a bias in favour of people's right to free expression. While we recognise that protecting the free exchange of ideas and information cannot be without some limits, we believe that more information generally means more choice, more freedom and ultimately more power for the individual.
Some limits, like child pornography, are obvious. No Australian wants that to be available -- and we agree. Google, like many other Internet companies, has a global, all-product ban against child sexual abuse material and we filter out this content from our search results. But moving to a mandatory ISP filtering regime with a scope that goes well beyond such material is heavy-handed and raises genuine questions about restrictions on access to information.
The recent report by Professors Catharine Lumby, Lelia Green, and John Hartley, Untangling The Net: The Scope of Content Caught By Mandatory Internet Filtering, found that a wide range of content could be prohibited under the proposed filtering regime. Refused Classification (or RC) is a broad category of content that includes not just child sexual abuse material but also socially and politically controversial material -- for example, educational content on safer drug use -- as well as the grey realms of material instructing in any crime, including politically controversial crimes such as euthanasia. This type of content may be unpleasant and unpalatable, but we believe that government should not have the right to block information which can inform debate of controversial issues.
While the discussion on ISP filtering continues, we should all retain focus on making the Internet safer for people of all ages. Our view is that online safety should focus on user education, user empowerment through technology tools (such as SafeSearch Lock), and cooperation between law enforcement and industry partners. The government has committed to important cybersafety education and engagement programs and yesterday announced additional measures that we welcome.
Opening politically controversial topics to public debate is vital for democracy. Homosexuality was a crime in parts of Australia until surprisingly recently: it was decriminalised in the ACT in 1976, in NSW in 1984 and in Tasmania only in 1997. Political and social norms change over time and benefit from intense public scrutiny and debate. The openness of the Internet makes this all the more possible and should be protected.
The government has requested comments from interested parties on its proposals for filtering and we encourage everyone to make their views known in this important debate.
Posted by Iarla Flynn, Head of Policy, Google Australia
Updated: December 16, 2009 at 5:00 PM
* Germany and Italy have mandatory ISP filtering, but in both cases the scope is clearly limited: in Germany it covers child abuse material, and in Italy child abuse material and unlawful gambling sites. Australia's proposed regime would be the first in the democratic world to combine a mandatory framework with a much wider scope of content.