Tag Archive for "Internal Affairs"

Archives Educates the DIA

Back in August I posted about how the Department of Internal Affairs had been deleting the reports used to justify filtering sites in the trial. This seemed a bit suspect to me, especially as they knew I had a request with the Ombudsman appealing their refusal to give me copies of them under the Official Information Act.

It’s also in contravention of the Public Records Act so I sent a letter asking the Chief Archivist what could be done about it. You can download the PDF of their reply, but the gist of it is in the following quote:

The Department of Internal Affairs report that they have taken steps to address this problem. The Department of Internal Affairs have made their staff familiar with the mandatory standards issued by the Chief Archivist that are relevant to managing these records in accordance with the Public Records Act 2005. Steps have also been taken to ensure that website filtering records cannot be deleted without seeking the necessary authority to do so.

I consider that these are appropriate remedial steps that will result in ongoing compliance with the Act.

I’m sure we can all be pleased that the Censorship Unit at the DIA will now do a better job of maintaining their data.

Update on Filtering

Today I met with some of the staff in the Censorship Unit at the Department of Internal Affairs to discuss the Internet filtering system.

Here’s some of what I learnt:

  • The Censorship Unit prosecute approximately 40-50 people a year for trading in child pornography, with a conviction rate of over 90%. Most of these are using P2P file sharing.
  • The purpose of the filter is not to stop the hard-core traders, but to stop the casual and curious. The view is that a curious person will be drawn into seeking out more and more material.
  • The Enterprise Internet filtering system (the final live system) will be installed in Auckland, Wellington and Christchurch. Initially all traffic will go through the Auckland site, with the other two as redundant fail-overs; eventually traffic will be load-balanced across all three sites.
  • The system also has redundant Internet connections.
  • The DIA claims that a major outage would be resolved in 5-10 minutes at worst.
  • The DIA say that the cost of the system is approximately $30k a year plus Internet and staff costs.
  • They really do re-check all 7,000 sites on the list each month. Apparently three checkers each spend about an hour per working day on this, reviewing roughly 120 sites an hour (the arithmetic roughly holds up – see the sketch after this list).
  • The Code of Practice is being rewritten somewhat in response to the submissions. In particular, the role of the Independent Reference Group (IRG) will be better defined.
  • The IRG will have access to the reports about the websites as well as the details of the appeal.
  • The DIA have been speaking to likely bodies to see if they wish to be part of the IRG.
  • There may be a role for the Office of Film and Literature Classification in auditing the list of banned sites.
  • We confirmed that the system doesn’t work with HTTPS (encrypted web traffic) or with IPv6, the new version of the Internet Protocol.
  • The contract for NetClean (the filtering product being used) specifies that the system can only be used to filter child pornography.
  • They say that they wouldn’t add Wikileaks to the filter if a copy of the list turned up there.
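
For what it’s worth, the re-checking claim is at least arithmetically plausible. Here’s a quick back-of-the-envelope check in Python – the figure of 21 working days a month is my assumption, not the DIA’s:

```python
# Back-of-the-envelope check of the DIA's re-checking claim:
# three checkers, about an hour each per working day, reviewing
# roughly 120 sites an hour.
checkers = 3
hours_per_day = 1
working_days_per_month = 21   # assumption: a typical month of working days
sites_per_hour = 120

sites_per_month = checkers * hours_per_day * working_days_per_month * sites_per_hour
print(sites_per_month)        # 7560 -- close to the claimed 7,000

print(3600 / sites_per_hour)  # 30.0 -- about 30 seconds spent per site
```

Note that this works out to about 30 seconds per site, which gives some idea of how cursory each re-check must be.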

I will be updating the FAQs accordingly.

No! But yes.

I have received another letter from the DIA (PDF) in response to further requests.

As well as a copy of the whitepaper that I’ve already written about, I asked the DIA to clarify “whether the filter will only be used for images of child sexual abuse or will it also be used for text files as described by Trevor Henry, Senior Communications Advisor on 17/7/2009”.

The response from Steve O’Brien, manager of the Censorship Compliance Unit, was as follows:

Unfortunately, Trevor Henry’s statement has been taken by some commentators as “proof” that the scope of the Digital Child Exploitation Filtering System will expand. As stated above, the purpose of the filtering system is to block access to known websites that contain images of child sexual abuse.

Well, I’m glad we cleared that up: the filter is only going to be used for pictures. But wait, what’s this – he hasn’t finished yet:

These websites sometimes also contain text files that exploit children for sexual purposes, and where this occurs those text files will also be blocked.

The concept of a text file exploiting a child seems odd to me.

Even ignoring the slightly ludicrous phrasing, part of the rhetoric around the implementation of the filter has been that they are trying to stop photos of actual abuse – “…will focus solely on websites offering clearly objectionable images of child sexual abuse.” There is no child being abused in a text file.

The examples given by Trevor Henry clearly demonstrate that written material can be just as abusive as pictures.

I don’t believe that he did clearly demonstrate that (read what he wrote). While what he describes is creepy, to my mind there is a clear distinction between writing about how to abuse a child and actually abusing one.

So, has this response cleared anything up? The answer has to be no. The DIA is still claiming that it’s trying to only ban images of child sexual abuse (something I might be inclined to support if they did it openly and if it had any chance of working) while at the same time admitting that they’ll ban other things that aren’t images as well.

DIA Report on Internet Filter Test

The Department of Internal Affairs (DIA) have released to me their report (PDF) on the testing of the Internet Filtering system.

The first half of it is a description of the system and doesn’t really contain much new information (except that we now know it runs on FreeBSD and uses the Quagga BGP daemon).

The second half of it is more interesting as it has some results from the DIA’s testing. This was apparently split into three phases:

  1. Single ISP with 5,000 users (this ISP already had its own filtering system, so it was probably Watchdog).
  2. Two ISPs with 25,000 users.
  3. Four ISPs with 600,000 users (at a guess this was when Ihug and TelstraClear joined).

Before we go on, a brief reminder of how the system works: the ISP diverts to the filter all requests bound for any Internet address that hosts one of the blocked sites. The filter then checks each diverted request and decides whether to block it or let it through. Requests for websites that don’t share an Internet address with a blocked site never reach the filter.
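
To make that two-stage process concrete, here’s a minimal sketch of the decision logic as I understand it. The IP addresses and URLs are entirely made up, and this is an illustration of the technique, not the DIA’s actual code:

```python
# Minimal sketch of the two-stage hybrid filter described above.
# All addresses and URLs are invented for illustration.

# Stage 1: the ISP diverts (via BGP route announcements) only the
# traffic bound for IP addresses that host at least one blocked site.
DIVERTED_IPS = {"203.0.113.7", "198.51.100.22"}

# Stage 2: the filter box compares the full requested URL to its list.
BLOCKED_URLS = {"badsite.example/gallery/"}

def handle_request(dest_ip: str, url: str) -> str:
    if dest_ip not in DIVERTED_IPS:
        return "never seen by the filter"   # normal routing, no inspection
    if any(url.startswith(blocked) for blocked in BLOCKED_URLS):
        return "blocked"                    # user gets the warning page
    return "passed through"                 # innocent site on a shared IP

print(handle_request("192.0.2.1", "innocent.example/home"))       # never seen by the filter
print(handle_request("203.0.113.7", "alsohosted.example/home"))   # passed through
print(handle_request("203.0.113.7", "badsite.example/gallery/1")) # blocked
```

This co-hosting is why, as the numbers below show, the filter inspects far more requests than it blocks.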

Interceptions

Now, back to the numbers. The phrasing in the whitepaper is a bit hard to interpret; the following is based on my best attempt at understanding it:

In phase 1, the system apparently had 3 million requests diverted to it each month and blocked 10,000 of those requests. This means that only a third of 1% of processed requests ended up being blocked.

In phase 2, there were 8 million requests per month, with 30,000 of them being blocked.

In phase 3, there were 40 million requests per month, with 100,000 of them being blocked.

In other words, there’s a very large number of requests being filtered through the DIA’s server compared to the number that are being blocked.
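
The block rates are easy to compute from those figures. A quick sketch:

```python
# Requests diverted to the filter vs. requests actually blocked,
# per month, as reported for each phase of the trial.
phases = {
    "phase 1": (3_000_000, 10_000),
    "phase 2": (8_000_000, 30_000),
    "phase 3": (40_000_000, 100_000),
}
for name, (diverted, blocked) in phases.items():
    print(f"{name}: {100 * blocked / diverted:.2f}% of diverted requests blocked")
# phase 1: 0.33%, phase 2: 0.38%, phase 3: 0.25%
```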

Effectiveness

There’s no way to measure the effectiveness of the filter at stopping people from finding child pornography – we can’t tell how many people worked around it or downloaded material using peer-to-peer file sharing or other methods.

One interesting number, however, is the number of blocked requests per user.

In phase 1, there were 2 blocked requests per user per month (10,000 blocked requests per month / 5,000 users).

In phase 2, there was just over 1 blocked request per user per month (an average of 30,000 blocked requests per month / 25,000 users).

In phase 3, there were 0.17 blocked requests per user per month (an average of 100,000 blocked requests per month / 600,000 users).

What’s odd is the way that the number of blocked requests per user goes down phase by phase. I have no idea what this indicates.
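
For completeness, the per-user arithmetic:

```python
# Blocked requests per user per month, for each phase of the trial.
phases = {
    "phase 1": (10_000, 5_000),
    "phase 2": (30_000, 25_000),
    "phase 3": (100_000, 600_000),
}
for name, (blocked, users) in phases.items():
    print(f"{name}: {blocked / users:.2f} blocked requests per user per month")
# phase 1: 2.00, phase 2: 1.20, phase 3: 0.17
```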

Robustness

According to the report, the system was operating at 80% capacity in the third phase. Apparently this was a bit much for it as: “the system did experience some stability issues processing this amount of requests and required maintenance on two occasions to replace hardware.”

There is no further detail about whether the “80% capacity” referred to the performance of the filtering system or the Internet connection they were using.

Government Department Powers

I sent in a submission to the Department of Internal Affairs about the Internet filtering scheme. Originally I was intending to run the normal arguments (which I’m sure anyone reading this is already familiar with) but I started to think about the constitutionality of the scheme – and my submission got away on me a bit.

Here’s some of the questions I’m thinking about:

  1. Where does the DIA derive the authority to create and implement the Internet filtering scheme from? I can’t find any basis for it in the Films, Videos, and Publications Classification Act 1993 that they mention in the draft Code of Practice.
  2. Can Government departments just make new powers up? What avenues are available to stop them?
  3. Even if there was some basis in the 1993 law, it’s very clear about censorship decisions having to be published, and also defines an accountable appeals process. The DIA seems to have completely ignored this when designing their scheme. Again, how can they do this and how can we stop them?
  4. When a new law is proposed that has Bill of Rights implications (“Everyone has the right to be secure against unreasonable search or seizure, whether of the person, property, or correspondence or otherwise” seems relevant), the Attorney-General has to submit a report to Parliament on those implications. Is there anything similar when Government departments create new powers for themselves? How does the Bill of Rights work if government departments can just ignore it?

If anyone has any answers to these questions or can point me in the direction of a constitutional lawyer who wouldn’t mind giving some free advice for a good cause, I’d be very grateful.

Why Even an Ineffectual Filter is Worrying

A friend recently said that he thought he’d found a flaw in my arguments. Firstly, I was saying that the DIA’s Internet filtering scheme won’t really work; secondly, I was saying that it was the first step on the slippery slope to out-of-control Internet censorship. How can the filtering scheme be a threat if it doesn’t even work?

There are two answers to this.

1. Does the Filter Work?

The Internet filtering scheme proposed by the Department of Internal Affairs is good at some things and bad at others.

What It’s Good At

The Netclean filter used by the DIA is limited to stopping access to particular websites or parts of websites based on their Internet address and path. This means that it’s good at stopping casual access to a known web-page that doesn’t get moved around.

If, for the sake of argument, the DIA decided to use the system to ban access to a certain page on Wikipedia, this would easily stop normal Internet users from reaching it. They would try to visit the page, see the notice saying it had been banned, and stop there – they probably don’t care that much, and they don’t know how to get around the filter.
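
To illustrate how literal that matching is, here’s a sketch of the exact address-and-path style of blocking (the URL is invented). Move or rename the page and the filter misses it until the list is updated:

```python
# Exact address-and-path matching: the filter only knows specific
# locations, not content. The URL below is invented for illustration.
BLOCKED = {"en.wikipedia.org/wiki/Some_Banned_Page"}

def is_blocked(url: str) -> bool:
    return url in BLOCKED

print(is_blocked("en.wikipedia.org/wiki/Some_Banned_Page"))    # True: casual access stopped
print(is_blocked("en.wikipedia.org/wiki/Some_Banned_Page_2"))  # False: same content, new path
```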

What It’s Not so Good At

It’s not very good at stopping people who are deliberately trading illegal material. Firstly, they coordinate the trading over chat and then share the files using peer-to-peer (P2P) systems – neither of which is blocked by the DIA’s Internet filter. Secondly, the content keeps getting moved around in order to avoid being shut down. Thirdly, the people doing this know that what they’re doing is wrong and illegal, so they actively take measures to protect themselves, such as using encrypted proxies in other countries. The filter will hardly even slow them down.

The more conspiracy-minded among you might ask why the DIA are trying to implement a scheme that won’t do a very good job of achieving its stated purpose but could be used to block access for normal people to normal websites.

2. The Filtering Principle

The more important reason to my mind is the slippery slope argument. While the currently proposed Internet filtering scheme is more ineffectual than scary, a successful implementation will establish some important and far-reaching principles such as:

  1. The Department of Internal Affairs has the right to arbitrarily decide to filter the Internet.
  2. The DIA has the right to decide what material should be filtered.
  3. It is acceptable for the government to intercept and examine Internet traffic without a search warrant.
  4. When censoring Internet content there is no need to meet the same oversight requirements that apply when censoring books or movies.
  5. The ISPs will happily censor their users.

I don’t agree with these principles, and once they’re established in practice it will be significantly harder to argue against them if things change in the future – for example, if the DIA decided to switch to a more invasive or disruptive filtering methodology, or chose to drastically extend the scope of the material being filtered.

Answer

So, to answer the original question, the DIA’s proposed Internet filter will be ineffectual at stopping the trade in child pornography, and it’s the implications of implementing it that particularly worry me.

DIA Admits Filter Shortcomings

Don’t just take my word for it, let’s see what the Department of Internal Affairs has to say about how well their system works (the following quoted text is all from the draft Code of Practice):

ISPs Might Not Participate

Participation in the Digital Child Exploitation Filtering System by ISPs is therefore voluntary and this provides an effective means of ensuring that the system keeps to its stated purpose. If ISPs become uncomfortable with the direction of the system, they can withdraw.

Doesn’t Prevent the Creation of Illegal Material and the Exploitation of Children

The Department of Internal Affairs appreciates that website filtering is only partially effective in combating the trade in child sexual abuse images. In particular website filtering is effective only after the fact and does not prevent the creation of illegal material nor, in the case of images of child sexual abuse, the exploitation of children.

Doesn’t Catch the People Doing It

The system also will not remove illegal content from its location on the Internet, nor prosecute the creators or intentional consumers of this material.

Can Easily be Circumvented

The Department also acknowledges that website filtering systems are not 100% effective in preventing access to illegal material. A person with a reasonable level of technical skill can use tools that are freely available on the Internet to get around the filters.

Doesn’t Stop File Sharing or Chatrooms

As illegal material, such as child sexual abuse images, is most often traded on peer-to-peer networks or chatrooms, which will not be filtered, the Censorship Compliance Unit carries out active investigations in those spaces.

Might Give Parents a False Sense of Security

The Department is aware that a website filter could give parents a false sense of security regarding their children’s online experience. Filters are unable to address all online risks, such as cyber-bullying, online sexual predators, viruses, or the theft of personal information.

Maybe the DIA should persuade themselves that Internet filtering is a good idea before trying to implement it.

DIA’s Draft Code of Practice

The Department of Internal Affairs has released a draft Code of Practice for the operation of their Internet filtering scheme.

The Code of Practice provides a good overview of how the system works and explains the policies that the DIA will use to manage the list. None of it should be a surprise to anyone who has read the FAQs and other articles on this website.

Rather than rehash the arguments for or against the filtering (although it is amusing to note that the Code admits a number of the system’s shortcomings), I’d rather write about the proposal for an Independent Reference Group that they have included in the draft Code.

The Department will institute an Independent Reference Group (IRG) to maintain oversight of the operation of the Digital Child Exploitation Filtering System to ensure it is operated with integrity and adheres to the principles set down in this Code of Practice.

That reads well enough, but finding out what the IRG will actually do is a bit trickier. The first reference to them is in relation to people appealing decisions to block particular sites: “The appeals submitted and the actions taken will be reported to the IRG. The reports will be published on the Department’s website.”

The second is merely about reviewing the Code of Practice: “The Department, in conjunction with the IRG, will review the Code after 12 months operation.”

The first major omission in the Code is the method by which the members of the IRG are going to be selected. We need to know that to get some idea of just how Independent the IRG will be.

Secondly, there is no real guidance to the workings of the IRG. How often will they meet? What powers will they have? What happens if there is a disagreement between the DIA and the IRG about the inclusion of a particular site?

Thirdly, note that the IRG still has to rely on the DIA, as they’re the ones who judge any appeals. The IRG only sees the report of the DIA’s inspector. Is the IRG going to be able to do anything more than say “The DIA told us that they’re doing everything correctly”?

Overall, I’m not sure the IRG really does much to address the need for openness and transparency. Once again the DIA’s desire to keep the filter list secret ends in the inevitable “you have to trust us”, except that this time we have to trust the IRG, who in turn have to trust the DIA.

Naturally I’ll be making a submission.

Another Internal Affairs Letter

A further response from the Department of Internal Affairs (page 1, page 2, page 3).

Some points from their letter:

  • The Internet filtering system will be going live “within two months”.
  • Public records? Those reports that we used to censor certain websites based on their content and then deleted are public records? Cor.
  • Confirmation that all traffic other than web traffic (chat, file sharing, email, etc.) is passed through the filter without being checked.
  • Confirmation that all HTTPS traffic (secure web traffic, such as that used by banks and shopping sites) is also passed through without being checked – see the sketch after this list for why encryption defeats this kind of filtering.
  • Three people will be employed to maintain the filtering system, although they might have other duties as well.
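
On the HTTPS point, here’s a rough sketch of why encrypted traffic defeats a URL-matching filter. This is illustrative only – with HTTPS the filter can see, at most, which server was contacted, never which page was requested:

```python
from urllib.parse import urlparse

def visible_to_filter(url: str) -> dict:
    """Approximate what a URL-matching filter can observe on the wire.

    Plain HTTP sends the host and path in cleartext, so the filter can
    match the full URL against its list. With HTTPS everything after the
    connection is established is encrypted, so the requested path is
    invisible and there is nothing for the filter to match.
    """
    u = urlparse(url)
    if u.scheme == "https":
        return {"host": u.hostname, "path": None}  # path hidden by encryption
    return {"host": u.hostname, "path": u.path}    # full URL visible

print(visible_to_filter("http://example.com/blocked/page"))
# {'host': 'example.com', 'path': '/blocked/page'}
print(visible_to_filter("https://example.com/blocked/page"))
# {'host': 'example.com', 'path': None}
```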

Internet Connection

The Internet filter server will be using a “fibre optic cable at 100Mb/sec” at a cost of $2000 per month.

After talking to people in the industry, this sounds like it will be a connection through the Wellington Citylink network, and at that price it will probably include 5-10Mbps of paid-for Internet bandwidth.

Filtered Content

Finally there are a series of questions & answers about the type of content they’re blocking.

First they respond with “All of the websites that were on the filtering list hosted images of child sexual abuse.”

But when asked about link sites, the response is very carefully written: “Some of the websites that were on the filtering list contained thumbnail sized child sexual abuse images as part of galleries and links to websites hosting objectionable material. There were no sites that did not include images of child sexual abuse, even if only thumbnail images.”

Of course, this is in direct contradiction to what they have told other people, as documented here (in both the article and the comments).

I hope the Ombudsman hurries up with their decision about releasing the list.

The Scope Always Creeps

Three Statements

From the DIA press release titled “Web filter will focus solely on child sex abuse images”:

A filtering system to block websites… will focus solely on websites offering clearly objectionable images of child sexual abuse.

From a letter from Nathan Guy, Minister of Internal Affairs:

I support the Digital Child Exploitation Filtering System that has been developed by the Department of Internal Affairs to help prevent access to child sexual abuse images. The filtering system will focus solely on known websites offering clearly objectionable images of child sexual abuse.

However, Rory McKinnon has also been writing to the DIA to collect information for an article he wrote for the NZPA. He asked: “Can the minister personally guarantee that the blacklist will not ‘creep’ (that is, expand its scope to include anything other than child sex abuse)?” The response (written by Trevor Henry, Senior Communications Advisor, Regulation and Compliance – 17/7/2009) to this letter was a little different:

The Department confirms that the scope of the filter will be confined to websites carrying images of children being sexually abused but there may be circumstances when a website that contains text files might be blocked. For example, an instructional manual for child abuse or a diary relating to the abuse of an actual child might be blocked.

Scope Creep

You may note that the first two statements very clearly say that the only websites to be filtered will be those that host images of children being sexually abused. The third statement contradicts the first two. It includes examples of other material that might be blocked by the filter, such as a text manual or a description of abuse.

Now, maybe you don’t think that filtering a manual or diary about sexual abuse is that bad – but that’s part of the problem. It often makes sense to extend something just a little bit further… then a little bit further… until finally you end up somewhere quite different from where you started.

For example, if you can filter an article that describes how to sexually abuse a child, can you also filter an article that explains how to get around the filter to read the first article?

The Australian system experienced this exact problem. They started by banning objectionable images and ended up banning entire websites that revealed what was being filtered.

Trusting the Government

Now, it probably bears repeating that I have no sympathy for those who create and distribute child pornography. My argument has always been that filtering won’t work, and that doing it secretly will lead the government to abuse it.

On the second point, the DIA’s position has been that we have to trust them and whatever secret review process they set up. But why should we trust them?

They refuse to give anyone a copy of the full list so that it can be audited, and when asked for a partial version of the list without the full addresses, they revealed that they have started deleting the records so that no one else can audit them either.

Now we find that their statements about only filtering images are also incorrect. The scope of what they’re doing has already grown before the system is out of the trial phase.

Internet filtering is not going to stop child abuse or child pornography and any such scheme is too open to abuse by the government. We should abandon it now – personally I’d rather we spent the money on preventing child abuse.