Companies are increasing their use of Twitter but it’s not always obvious what they should be doing on it.
It doesn’t help that different people have different opinions. I’m not interested in following companies to get deals or beg for favours, and I detest “retweet to win” contests that try to turn everyone into unpaid spammers.
Here are five times that companies made a positive impression on me by interacting over Twitter:
- I welcomed Whittakers (@whittakersnz) to Twitter and asked when they were going to do Easter eggs. I was told they were working on it.
- Orcon offered to help me resolve a problem with their service. (Not solvable by them, it needed Telecom to pull finger.)
- I complained about Netgear service and got a phone call (!) from their PR company in Australia. I declined their offer to help and sorted it out through the usual channels the next day.
- I responded to a question about where to get batteries by suggesting Dick Smith. Someone else speculated that their house-brand batteries might not be as good. @DickSmithNZ responded with details of their current battery sale as well as a link to a report showing that their batteries were as good as the name-brand ones.
- I said I was switching from Vodafone to Telecom and received a “Welcome on board” from @TelecomNZ.
Each of these companies treated me as a person and, by doing so, made me feel better about dealing with them.
The winner is Dick Smith for their quick and useful response – but I admit I’m still hanging out for those Easter eggs from Whittakers!
The Law Commission’s report Suppressing Names and Evidence is a waste of time and money. They have spent a lot of time thinking about exactly why, how and what information should be suppressed, while neglecting to consider whether this suppression is even possible.
The Recent Case
I assume you’ve heard of the “well known entertainer” who was recently granted name suppression after the judge discharged them without conviction for offensive behaviour. For some reason a number of people felt it was so important to tell everyone who it was that I found out the name via:
- Online chat
- Kiwiblog
I’m told it was on the Trade Me forums as well, among many others. Even the Wikipedia page for the performer has the details – if you think to look in the edit history.
Takedown and Blocking Notices
Publication of this sort of information on the Internet can’t be stopped. While you could send a takedown notice to local sites (such as Trademe and Kiwiblog) and expect it to be honoured, overseas sites such as Facebook and Twitter are going to ignore it.
The Law Commission seems to suggest that it will be the responsibility of ISPs to block access to sites publishing such information (recommendation 26 from the report):
Where an internet service provider or content host becomes aware that they are carrying or hosting information that they know is in breach of a suppression order, it should be an offence for them to fail to remove the information or to fail to block access to it as soon as reasonably practicable.
In the case above this would mean that ISPs would have to block the Facebook and Twitter web pages (the nature of these services means that you can’t just block a single piece of information as it could appear on any number of pages/URLs). They’d also have to block a number of other international forum sites. Ultimately, we would end up with the requirement to block every website in the world that contains content submitted by the users of the site.
If we get to this point the Internet in New Zealand is fundamentally broken and we’ve decided to stop being a member of the information age. Obviously this is not going to happen.
Is Name Suppression Dead?
If the courts can’t suppress information on the Internet is there any point continuing with suppression at all?
One counter argument is that not everyone is of as much interest as a “well known entertainer” so in some cases name suppression might continue to work. But the current trend is for people to put more and more of their lives, and the lives of the people they know, online. Over time I expect suppression to become less and less effective, even for people who don’t have a national profile.
You may notice that this article makes no comment on whether name suppression is good or bad. I’ve not always been happy with how it is used but, in general, I’m not completely against the concept, especially when it is used to protect the victim.
The problem is that my opinion, just like that of the Law Commission, is becoming increasingly irrelevant. The Internet does such a good job of sharing information that the idea of being able to control access to that information is becoming obsolete.
Court ordered suppression might work partially for a few more years but the end is in sight. The Law Commission would have done a better job if they had recognised this.
Sometimes it seems that every day there is another threat to people’s ability to use the Internet. Each special interest group has their own barrow to push, often with honourable intent, that causes them to make impossible or unreasonable demands.
Today’s effort is from the Law Commission. They’ve published their Suppressing Names and Evidence report and it includes the following (recommendation 26 from the report, page 66, PDF):
Where an internet service provider or content host becomes aware that they are carrying or hosting information that they know is in breach of a suppression order, it should be an offence for them to fail to remove the information or to fail to block access to it as soon as reasonably practicable.
There’s nothing new in extending the current rules about not publishing suppressed material to cover hosting a website that publishes it. Obviously someone will have to complain to the ISP (Internet Service Provider) that is hosting the suppressed information, but the ISP will be able to refer to the judge’s suppression order and remove it. (Although, of course, there may be times when it is unclear whether a particular piece of information breaches a suppression order.)
The more worrying part is the use of the word “carrying” which, as far as I can tell, can only refer to information that the ISP is carrying between ‘somewhere on the Internet’ and the user.
By demanding that the ISP be able to block access to this information, the Law Commission is requiring all ISPs to implement a filtering system that is capable of blocking any access on any Internet protocol to any Internet address that may have the suppressed information. If they fail to do so, penalties include fines and imprisonment (exactly how you imprison an ISP I am not sure).
There are a number of problems with this:
- Each ISP would have to implement a filtering system (both technically and procedurally) and this would be very expensive.
- It puts an unreasonable responsibility on the ISP.
- Who would be responsible for removing information blocks when a suppression order is lifted?
- Most importantly, what they have asked for is technically impossible to implement.
Why is it technically impossible?
Information is shared on the Internet using a number of different methods (protocols). They include email, online chat, web pages, and peer to peer file-sharing. A number of these different protocols use encryption between the user and the site. For example, banks and online shops all use secure web traffic (HTTPS) to keep your transactions safe from interception.
If a piece of suppressed information is made available at an Internet address that uses encryption, the ISP can’t read the encrypted request and will therefore have to block all traffic to that Internet address. If your online shop uses the same Internet address as a site with suppressed information (sharing Internet addresses is very common, with some addresses hosting thousands of sites), access to your shop will also be blocked.
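The over-blocking problem can be sketched in a few lines. The hostnames and addresses below are entirely invented for illustration; the point is that with encrypted (HTTPS) traffic the filter can’t see which site a request is for, so it can only block at the level of a shared Internet address:

```python
# Sketch of IP-level blocking on shared hosting.
# All hostnames and addresses here are made up for illustration.

# Many sites commonly share a single server address.
hosted_on = {
    "suppressed-info.example": "203.0.113.10",
    "my-online-shop.example": "203.0.113.10",   # innocent site, same address
    "another-blog.example": "203.0.113.10",     # also innocent
    "unrelated.example": "198.51.100.7",
}

# The court order targets only one site, but because the traffic is
# encrypted the filter is forced to block the whole address.
blocked_ips = {hosted_on["suppressed-info.example"]}

def is_blocked(hostname: str) -> bool:
    """True if traffic to this host would be dropped by an IP-level filter."""
    return hosted_on[hostname] in blocked_ips

for site in sorted(hosted_on):
    print(site, "-> blocked" if is_blocked(site) else "-> ok")
```

Running this shows the shop and the blog blocked alongside the target site, while only the site on its own address escapes – which is exactly the collateral damage described above.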
This means that every time someone overseas publishes information contrary to a suppression order from the New Zealand courts, a number of websites will have to be blocked. This will fundamentally break the Internet in New Zealand.
Of course, if you run a bookstore in New Zealand, you might find it advantageous to add some suppressed information to a review on amazon.com!
This doesn’t even cover the technical difficulties and costs involved in deploying a blocking system that can filter everything on the Internet. You may note that the Chinese Government has spent a lot of time and effort building their Great Firewall of China and even that does a poor job of blocking information.
Conclusion
I believe the Law Commission needs to rethink this recommendation. The blocking they have asked for is technically impossible to implement without breaking the Internet.
Of course, if it’s impossible to suppress information on the Internet, is there any point in suppressing it in newspapers and other media? We may have to accept that we cannot suppress information on a pervasive global communications network.
It looks as though the Law Commission’s report may be obsolete on the day it was published.
Today I met with some of the staff in the Censorship Unit at the Department of Internal Affairs to discuss the Internet filtering system.
Here’s some of what I learnt:
- The Censorship Unit prosecute approximately 40-50 people a year for trading in child pornography, with a conviction rate of over 90%. Most of these are using P2P file sharing.
- The purpose of the filter is not to stop the hard core traders, but to stop the casual and curious. The view is that a curious person will be sucked into getting more and more.
- The Enterprise (final live) Internet filtering system will be installed in Auckland, Wellington and Christchurch. Initially all traffic will go through the Auckland location with the others as redundant fail-over sites; eventually the traffic will be load-balanced between the sites.
- The system also has redundant Internet connections.
- The DIA claims that a major outage would be resolved in 5-10 minutes at worst.
- The DIA say that the cost of the system is approximately $30k a year plus Internet and staff costs.
- They really do re-check 7,000 sites each month. Apparently there are three checkers who each spend about an hour every working day on it, checking about 120 sites an hour.
- The Code of Practice is being rewritten somewhat in response to the submissions. In particular, the role of the Independent Reference Group (IRG) will be better defined.
- The IRG will have access to the reports about the websites as well as the details of the appeal.
- The DIA have been speaking to likely bodies to see if they wish to be part of the IRG.
- There may be a role for the Office of Film and Literature Classification in auditing the list of banned sites.
- We confirmed that the system doesn’t work with HTTPS (encrypted web traffic) or with IPv6, the new version of the Internet Protocol.
- The NetClean (the filtering product being used) contract specifies that the system can only be used to filter child pornography.
- They say that they wouldn’t add Wikileaks to the filter if a copy of the list turned up there.
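The re-checking figure quoted above roughly adds up, assuming about 20 working days in a month (the 20-day figure is my assumption, not the DIA’s):

```python
# Rough check of the DIA's claim that three staff re-check ~7,000 sites a month.
checkers = 3
sites_per_hour = 120
hours_per_day = 1
working_days_per_month = 20   # assumption: ~20 working days in a month

sites_per_month = checkers * sites_per_hour * hours_per_day * working_days_per_month
print(sites_per_month)   # 7200 – close to the claimed 7,000
```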
I will be updating the FAQs accordingly.
I have received another letter from the DIA (PDF) in response to further requests.
As well as a copy of the whitepaper that I’ve already written about, I asked the DIA to clarify “whether the filter will only be used for images of child sexual abuse or will it also be used for text files as described by Trevor Henry, Senior Communications Advisor on 17/7/2009”.
The response from Steve O’Brien, manager of the Censorship Compliance Unit, was as follows:
Unfortunately, Trevor Henry’s statement has been taken by some commentators as “proof” that the scope of the Digital Child Exploitation Filtering System will expand. As stated above, the purpose of the filtering system is to block access to known websites that contain images of child sexual abuse.
Well, I’m glad we cleared that up: the filter is only going to be used for pictures. But wait, he hasn’t finished yet:
These websites sometimes also contain text files that exploit children for sexual purposes, and where this occurs those text files will also be blocked.
The concept of a text file exploiting a child seems odd to me.
Even ignoring the slightly ludicrous phrasing, part of the rhetoric around the implementation of the filter has been that they are trying to stop photos of actual abuse – “…will focus solely on websites offering clearly objectionable images of child sexual abuse.” There is no child being abused in a text file.
The examples given by Trevor Henry clearly demonstrate that written material can be just as abusive as pictures.
I don’t believe that he did clearly demonstrate that (read what he wrote). While what he describes is creepy, to my mind there is a clear distinction between writing about how to abuse a child and actually abusing one.
So, has this response cleared anything up? The answer has to be no. The DIA is still claiming that it’s trying to only ban images of child sexual abuse (something I might be inclined to support if they did it openly and if it had any chance of working) while at the same time admitting that they’ll ban other things that aren’t images as well.
The Department of Internal Affairs (DIA) have released to me their report (PDF) on the testing of the Internet Filtering system.
The first half of it is a description of the system and doesn’t really contain much new information (except that we now know it runs on FreeBSD and uses the Quagga BGP daemon).
The second half of it is more interesting as it has some results from the DIA’s testing. This was apparently split into three phases:
- Single ISP with 5,000 users (they already had their own filtering system, so it was probably Watchdog).
- Two ISPs with 25,000 users.
- Four ISPs with 600,000 users (at a guess this was when Ihug and TelstraClear joined).
Before we go on, a brief reminder of how it works: The ISP diverts all requests that are on the same Internet address as one of the blocked sites. The filter then checks each diverted request and decides whether to block it or let it through. The filter never sees requests for websites that don’t share an Internet address with a blocked site.
Interceptions
Now, back to the numbers. The phrasing in the whitepaper is a bit hard to interpret; the following is based on my best attempt at understanding it:
In phase 1, the system apparently had 3 million requests diverted to it each month and blocked 10,000 of those requests. This means that only a third of 1% of processed requests ended up being blocked.
In phase 2, there’s 8 million requests per month with 30,000 of them being blocked.
In phase 3, there’s 40 million requests per month with 100,000 of them being blocked.
In other words, there’s a very large number of requests being filtered through the DIA’s server compared to the number that are being blocked.
Effectiveness
There’s no way to measure the effectiveness of the filter at stopping people from finding child pornography – we can’t tell how many people worked around it or downloaded material using peer to peer filesharing or other methods.
One interesting number, however, is the number of blocked requests per user.
In phase 1, there’s 2 blocked requests per user per month (10,000 blocked requests per month/5000 users).
In phase 2, there’s just over 1 blocked request per user per month (average 30,000 blocked requests per month, 25,000 users).
In phase 3, there’s 0.17 (average 100,000 blocked requests per month, 600,000 users).
What’s odd is the way that the number of blocked requests per user goes down phase by phase. I have no idea what this indicates.
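The arithmetic behind the figures above is easy to check:

```python
# Checking the whitepaper arithmetic for the three test phases.
phases = {
    1: {"requests": 3_000_000, "blocked": 10_000, "users": 5_000},
    2: {"requests": 8_000_000, "blocked": 30_000, "users": 25_000},
    3: {"requests": 40_000_000, "blocked": 100_000, "users": 600_000},
}

for n, p in phases.items():
    blocked_pct = 100 * p["blocked"] / p["requests"]
    per_user = p["blocked"] / p["users"]
    print(f"phase {n}: {blocked_pct:.2f}% of diverted requests blocked, "
          f"{per_user:.2f} blocked requests per user per month")
```

This gives roughly 0.33%, 0.38% and 0.25% of diverted requests blocked, and 2.0, 1.2 and 0.17 blocked requests per user per month – matching the figures in the text.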
Robustness
According to the report, the system was operating at 80% capacity in the third phase. Apparently this was a bit much for it as: “the system did experience some stability issues processing this amount of requests and required maintenance on two occasions to replace hardware.”
There is no further detail about whether the “80% capacity” referred to the performance of the filtering system or the Internet connection they were using.
A friend recently said that he thought he’d found a flaw in my arguments. Firstly I was saying that the DIA’s Internet filtering scheme won’t really work, and secondly I was saying that it was the first step on the slippery slope of out of control Internet censorship. How can the filtering scheme be a threat if it doesn’t even work?
There are two answers to this.
1. Does the Filter Work?
The Internet filtering scheme proposed by the Department of Internal Affairs is good at some things and bad at others.
What It’s Good At
The Netclean filter used by the DIA is limited to stopping access to particular websites or parts of websites based on their Internet address and path. This means that it’s good at stopping casual access to a known web-page that doesn’t get moved around.
If, for the sake of argument, the DIA decided to use the system to ban access to a certain page on Wikipedia, this would easily stop normal Internet users from accessing the page. They would try to visit the page, they’d get the page saying it had been banned – and they’d stop there because they probably don’t really care that much, nor do they know how to get around the filter.
What It’s Not so Good At
It’s not very good at stopping people who are deliberately trading illegal material. Firstly, they’re coordinating the trading by chat and then sharing the files using peer to peer (P2P) systems – neither of which is blocked by the DIA’s Internet filter. Secondly, the content keeps getting moved around in order to avoid being shut down. Thirdly, the people doing this know that what they’re doing is wrong and illegal, so they’re actively taking measures to protect themselves such as using encrypted proxies in other countries. The filter will hardly even slow them down.
The more conspiracy-minded among you might ask why the DIA are trying to implement a scheme that won’t do a very good job of achieving its stated purpose but could be used to block normal people’s access to normal websites.
2. The Filtering Principle
The more important reason to my mind is the slippery slope argument. While the currently proposed Internet filtering scheme is more ineffectual than scary, a successful implementation will establish some important and far-reaching principles such as:
- The Department of Internal Affairs has the right to arbitrarily decide to filter the Internet.
- The DIA has the right to decide what material should be filtered.
- It is acceptable for the government to intercept and examine Internet traffic without a search warrant.
- When censoring Internet content there is no need to meet the same oversight requirements that apply when censoring books or movies.
- The ISPs will happily censor their users.
I don’t agree with these principles, and once they’re established in practice it will be significantly harder to argue against them in the future if things change. For example, if the DIA decided to change the methodology used for the filtering to a more invasive/disruptive one, or chose to drastically extend the scope of the material to be filtered.
Answer
So, to answer the original question, the DIA’s proposed Internet filter will be ineffectual at stopping the trade in child pornography, and it’s the implications of implementing it that particularly worry me.
UK’s Child Exploitation and Online Protection Centre
The UK’s Child Exploitation and Online Protection Centre (CEOP) has released its latest annual report (2008/2009). While they cover a much wider range of activity, one of the findings is highly relevant to the debate about Internet filtering in New Zealand.
Similarly we can report a step change in the way that offenders access images, with the vast majority of trading taking place on various peer to peer (P2P) platforms. Our focus must now be on tackling this as a priority.
What’s relevant about this? The Department of Internal Affairs’ proposed Internet filtering scheme only works with unencrypted websites, and is useless against P2P file trading.
How Voluntary Ends
The UK has a non-government, voluntary web-filtering system that Internet Service Providers (ISPs) can sign up for. Well, as reported by the Independent, it won’t be voluntary for much longer:
The leaked Home Office letter says a clause in the Police, Crime and Private Security Bill in the Queen’s Speech would “compel domestic ISPs to implement the blocking of illegal images of child sexual abuse”.
The New Zealand system is also voluntary for ISPs. I’m worried that the voluntary nature of the scheme will only last as long as it takes to get established.
Filtering Systems are an Opportunity for People Who Wish to Suppress Free Speech
Finally, there’s an article about an attempt by Scientologists in Australia to have not only anti-Scientology sites added to the Australian Internet filter, but also the sites that provide anonymisation services to them.
We have identified that such websites play a major role in the ongoing hate campaign against our Church and their removal or a restriction of access and of content would play a major role in preventing further religious vilification against us.
It is therefore recommended that the Australian Government take action to prevent to the creators of websites, whose primary purpose is the incitement of religious vilification, to be prevented from using programs such as WhoisGuard to conceal their identity, so that normal recourse to the law may be accessed as needed to defend basic rights covered by Australian law.
One of the problems with Internet filtering is that, once it’s established, there are no further technical or legal hurdles to banning material on the Internet. This means that the filter becomes fair game for every group that has a bee in its bonnet about controlling access to certain sorts of information.
IT Minister Steven Joyce has finally sent me a reply. This was in response to me asking him about his statements in NBR against the idea of Internet filtering:
We have been following the internet filtering debate in Australia but have no plans to introduce something similar here.
The technology for internet filtering causes delays for all internet users. And unfortunately those who are determined to get around any filter will find a way to do so. Our view is that educating kids and parents about being safe on the internet is the best way of tackling the problem.
I asked:
- Are you still against the introduction of internet filtering?
- Does the National government have a policy for or against internet filtering?
His letter didn’t answer either question; the only substantive part of it was:
I would like to say first off that the voluntary website filtering system being proposed in New Zealand by the Department of Internal Affairs is significantly different in design and scope from the mandatory system proposed in Australia.
He then listed a number of differences in the systems and suggested I talk to the Department of Internal Affairs about it.
What I find interesting about this letter is that he carefully avoids answering both of the questions I asked. Steven Joyce refuses to say that he now supports Internet filtering, and also refuses to take the opportunity to state whether the National government has a policy for or against it.
I find this lack of clear political support for the scheme to be heartening.
Don’t just take my word for it, let’s see what the Department of Internal Affairs has to say about how well their system works (the following quoted text is all from the draft Code of Practice):
ISPs Might Not Participate
Participation in the Digital Child Exploitation Filtering System by ISPs is therefore voluntary and this provides an effective means of ensuring that the system keeps to its stated purpose. If ISPs become uncomfortable with the direction of the system, they can withdraw.
Doesn’t Prevent the Creation of Illegal Material and the Exploitation of Children
The Department of Internal Affairs appreciates that website filtering is only partially effective in combating the trade in child sexual abuse images. In particular website filtering is effective only after the fact and does not prevent the creation of illegal material nor, in the case of images of child sexual abuse, the exploitation of children.
Doesn’t Catch the People Doing It
The system also will not remove illegal content from its location on the Internet, nor prosecute the creators or intentional consumers of this material.
Can Easily be Circumvented
The Department also acknowledges that website filtering systems are not 100% effective in preventing access to illegal material. A person with a reasonable level of technical skill can use tools that are freely available on the Internet to get around the filters.
Doesn’t Stop File Sharing or Chatrooms
As illegal material, such as child sexual abuse images, is most often traded on peer-to-peer networks or chatrooms, which will not be filtered, the Censorship Compliance Unit carries out active investigations in those spaces.
Might Give Parents a False Sense of Security
The Department is aware that a website filter could give parents a false sense of security regarding their children’s online experience. Filters are unable to address all online risks, such as cyber-bullying, online sexual predators, viruses, or the theft of personal information.
Maybe the DIA should persuade themselves that Internet filtering is a good idea before trying to implement it.