(This is another in my series of posts where I record my current feelings about technology in order to have something to laugh at in five years’ time. See also What I Use and Market Calibration.)
I’m getting urges for a new laptop again.
The main requirements when I got this laptop (Compaq 2510p) were:
- Small and lightweight (it’s 1.4kg)
- Long battery life (~6 hours)
- Runs Windows well (it’s part of my job)
The Compaq has worked pretty well for me but my biases are changing. My new requirements are:
- Relatively lightweight (but not so worried about small any more)
- Backlit keyboard (I do quite a lot of writing at night but I don’t touch-type. A backlit keyboard would make both Kim and me happier.)
- High resolution screen (at least higher than the current 1280×800)
- Well over four hours useful battery life (i.e. I might accept four but I really want more)
- I’d still choose battery life over performance and I’d prefer it not to have an optical drive to save weight.
And while it’s not necessary, I suspect that any laptop that meets those criteria will have one of the much cooler solid-state drives (SSDs). They’re lighter and use less power, plus they have no moving parts so they’re more reliable.
When it comes to the operating system, while I’d like to have a proper go at using Mac OS X, Windows 7 fulfils most of my requirements and is more appropriate for my job.
Finally, it’d be nice if it looked kind of stylish, maybe even with a bit of colour.
Contenders
I recently had a play on a Dell Latitude Z600 and I have to admit it was pretty good, even though it’s much bigger than anything I’ve considered before. It’s thin and surprisingly light for its size. While the 16″ screen at 1600×900 has more pixels than my current screen, you’d think a panel that size could fit even more. Unfortunately it costs an ungodly amount of money and the battery life is apparently atrocious.
The HP Envy 13 also looks pretty good. Stylish, good screen (1600×900 is acceptable on a 13″ screen), good battery life – but no backlit keyboard.
Dell have just put out the Vostro V13 at a good price but I really don’t want to stay with a standard res screen and no keyboard backlight.
The Sony Z is rather cool. 13″ screen at 1600×900, good battery life, backlit keyboard… why not? Sadly I have absolute faith in Sony’s ability to screw it up by filling it full of crap software and failing to provide good hardware drivers. It’s a pity because otherwise I think it might be the winner.
When it comes to the Apple range, the MacBook Pro 13″ is rather nice (and the price recently dropped). It fulfils most of my requirements (except screen resolution) but the styling is looking a bit dated and I don’t trust Apple to do a good job of releasing drivers that will allow Windows 7 to work to its full potential.
As usual, I find myself wishing I could cut’n’paste features from multiple models so that I could end up with the perfect laptop!
Late addition
I ended up with the Sony Vaio Z (the high-end version with a 1920×1080 screen, 8GB memory, 256GB SSD, etc). It’s a very nice laptop but my fears that Sony would do their best to screw it up were not unfounded. It came with 9 ugly stickers as well as a whole load of badly written Sony software pre-installed. Luckily that could all be cleaned up and I’m really very happy with it.
I’ve moved my writing about internet freedom to a new group blog, Tech Liberty.
Tech Liberty
From the website:
We’re concerned about the erosion of people’s civil liberties in the digital world. Some people seem to think we give up our rights as soon as we do something on the Internet rather than on paper. We don’t agree.
Tech Liberty is dedicated to protecting people’s rights in the areas of the Internet and technology. We make submissions on public policy, help to educate people about their rights, and defend those whose rights are being infringed.
I’ll also be moving some of the resources (such as the Internet Filtering FAQs) over to the Tech Liberty site.
Future for thomasbeagle.net
This means that this blog will go back to what it was before I started concentrating so much on Internet filtering – a place I post random stuff that doesn’t really go anywhere else. Rate of posting is highly variable.
This post is sticky so anything new will be beneath it.
The Law Commission’s report Suppressing Names and Evidence is a waste of time and money. They have spent a lot of time thinking about exactly why, how and what information should be suppressed, while neglecting to consider whether this suppression is even possible.
The Recent Case
I assume you’ve heard of the “well known entertainer” who was recently granted name suppression after being discharged without conviction for offensive behaviour. A number of people evidently felt it was so important to tell everyone who it was that I found out their name via:
- Online chat
- Kiwiblog
I’m told it was also on the Trade Me forums, among many others. Even the Wikipedia page for the performer has the details – if you think to look in the edit history.
Takedown and Blocking Notices
Publication of this sort of information on the Internet can’t be stopped. While you could send a takedown notice to local sites (such as Trade Me and Kiwiblog) and expect it to be honoured, overseas sites such as Facebook and Twitter are going to ignore it.
The Law Commission seems to suggest that it will be the responsibility of ISPs to block access to sites publishing such information (recommendation 26 from the report):
Where an internet service provider or content host becomes aware that they are carrying or hosting information that they know is in breach of a suppression order, it should be an offence for them to fail to remove the information or to fail to block access to it as soon as reasonably practicable.
In the case above this would mean that ISPs would have to block the Facebook and Twitter web pages (the nature of these services means that you can’t just block a single piece of information as it could appear on any number of pages/URLs). They’d also have to block a number of other international forum sites. Ultimately, we would end up with the requirement to block every website in the world that contains content submitted by the users of the site.
If we get to this point the Internet in New Zealand is fundamentally broken and we’ve decided to stop being a member of the information age. Obviously this is not going to happen.
Is Name Suppression Dead?
If the courts can’t suppress information on the Internet is there any point continuing with suppression at all?
One counter argument is that not everyone is of as much interest as a “well known entertainer” so in some cases name suppression might continue to work. But the current trend is for people to put more and more of their lives, and the lives of the people they know, online. Over time I expect suppression to become less and less effective, even for people who don’t have a national profile.
You may notice that this article makes no comment on whether name suppression is good or bad. I’ve not always been happy with how it is used but, in general, I’m not completely against the concept, especially when it is used to protect the victim.
The problem is that my opinion, just like that of the Law Commission, is becoming increasingly irrelevant. The Internet does such a good job of sharing information that the idea of being able to control access to that information is becoming obsolete.
Court-ordered suppression might work partially for a few more years but the end is in sight. The Law Commission would have done a better job if they had recognised this.
Sometimes it seems that every day there is another threat to people’s ability to use the Internet. Each special interest group has its own barrow to push, often with honourable intent, and ends up making impossible or unreasonable demands.
Today’s effort is from the Law Commission. They’ve published their Suppressing Names and Evidence report and it includes the following (recommendation 26 from the report, page 66, PDF):
Where an internet service provider or content host becomes aware that they are carrying or hosting information that they know is in breach of a suppression order, it should be an offence for them to fail to remove the information or to fail to block access to it as soon as reasonably practicable.
There’s nothing new in extending the current rules about not publishing suppressed material to cover hosting a website that publishes it. Obviously someone will have to complain to the ISP (Internet Service Provider) that is hosting the suppressed information, but the ISP will be able to refer to the judge’s suppression order and remove it. (Although of course there may be times when it is unclear whether a particular piece of information breaches a suppression order.)
The more worrying part is the use of the word “carrying” which, as far as I can tell, can only refer to information that the ISP is carrying between ‘somewhere on the Internet’ and the user.
By demanding that the ISP be able to block access to this information, the Law Commission is requiring all ISPs to implement a filtering system that is capable of blocking any access on any Internet protocol to any Internet address that may have the suppressed information. If they fail to do so, penalties include fines and imprisonment (exactly how you imprison an ISP I am not sure).
There are a number of problems with this:
- Each ISP would have to implement a filtering system (both technically and procedurally) and this would be very expensive.
- It puts an unreasonable responsibility on the ISP.
- Who would be responsible for removing information blocks when a suppression order is lifted?
- Most importantly, what they have asked for is technically impossible to implement.
Why is it technically impossible?
Information is shared on the Internet using a number of different methods (protocols). They include email, online chat, web pages, and peer-to-peer file sharing. A number of these protocols use encryption between the user and the site. For example, banks and online shops all use secure web traffic (HTTPS) to keep your transactions safe from interception.
If a piece of suppressed information is made available at an Internet address that uses encryption, the ISP can’t read the encrypted request and will therefore have to block all traffic to that Internet address. If your online shop uses the same Internet address as a site with suppressed information (sharing Internet addresses is very common, with some addresses hosting thousands of sites), access to your shop will also be blocked.
This means that every time someone overseas publishes information contrary to a suppression order from the New Zealand courts, a number of websites will have to be blocked. This will fundamentally break the Internet in New Zealand.
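To make the shared-address problem concrete, here’s a small sketch in Python. It’s purely my own illustration – the hostnames and address are invented – but it shows why an IP-level block (the only kind possible for encrypted traffic) takes out every site on the address:

```python
# Hypothetical example: several sites sharing one Internet address,
# as is common with virtual hosting. All names/addresses are invented.
HOSTED_ON = {
    "192.0.2.10": [
        "innocent-shop.example",
        "someones-blog.example",
        "site-with-suppressed-info.example",
    ],
}

def collateral_damage(offending_host: str) -> list[str]:
    """Innocent sites an IP-level block takes down along with the target."""
    for ip, hosts in HOSTED_ON.items():
        if offending_host in hosts:
            # Encrypted (HTTPS) traffic only reveals the destination IP,
            # not which site or page is being requested, so the whole
            # address must be blocked to block one page.
            return [h for h in hosts if h != offending_host]
    return []

print(collateral_damage("site-with-suppressed-info.example"))
# -> ['innocent-shop.example', 'someones-blog.example']
```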
Of course, if you run a bookstore in New Zealand I suggest that you might find it advantageous to make sure to add some suppressed information to a review on amazon.com!
This doesn’t even cover the technical difficulties and costs involved in deploying a blocking system that can filter everything on the Internet. You may note that the Chinese Government has spent a lot of time and effort building their Great Firewall of China and even that does a poor job of blocking information.
Conclusion
I believe the Law Commission needs to rethink this recommendation. The blocking they have asked for is technically impossible to implement without breaking the Internet.
Of course, if it’s impossible to suppress information on the Internet, is there any point in suppressing it in newspapers and other media? We may have to accept that we cannot suppress information on a pervasive global communications network.
It looks as though the Law Commission’s report may have been obsolete on the day it was published.
Back in August I posted about how the Department of Internal Affairs had been deleting the reports used to justify filtering sites in the trial. This seemed a bit suspect to me, especially as they knew I had a request with the Ombudsman appealing their refusal to give me copies of them under the Official Information Act.
It’s also in contravention of the Public Records Act so I sent a letter asking the Chief Archivist what could be done about it. You can download the PDF of their reply, but the gist of it is in the following quote:
The Department of Internal Affairs report that they have taken steps to address this problem. The Department of Internal Affairs have made their staff familiar with the mandatory standards issued by the Chief Archivist that are relevant to managing these records in accordance with the Public Records Act 2005. Steps have also been taken to ensure that website filtering records cannot be deleted without seeking the necessary authority to do so.
I consider that these are appropriate remedial steps that will result in ongoing compliance with the Act.
I’m sure we can all be pleased that the Censorship Unit at the DIA will now do a better job of maintaining their data.
Today I met with some of the staff in the Censorship Unit at the Department of Internal Affairs to discuss the Internet filtering system.
Here’s some of what I learnt:
- The Censorship Unit prosecute approximately 40-50 people a year for trading in child pornography, with a conviction rate of over 90%. Most of these are using P2P file sharing.
- The purpose of the filter is not to stop the hard core traders, but to stop the casual and curious. The view is that a curious person will be sucked into getting more and more.
- The Enterprise (final, live) Internet filtering system will be installed in Auckland, Wellington and Christchurch. Initially all traffic will go through the Auckland location with the others as redundant fail-over sites; eventually the traffic will be load-balanced between the sites.
- The system also has redundant Internet connections.
- The DIA claims that a major outage would be resolved in 5-10 minutes at worst.
- The DIA say that the cost of the system is approximately $30k a year plus Internet and staff costs.
- They really do re-check 7,000 sites each month. Apparently there are three checkers who each spend about an hour every working day on it, checking about 120 sites an hour (three checkers × 120 sites × roughly 20 working days ≈ 7,000).
- The Code of Practice is being rewritten somewhat in response to the submissions. In particular, the role of the Independent Reference Group (IRG) will be better defined.
- The IRG will have access to the reports about the websites as well as the details of the appeal.
- The DIA have been speaking to likely bodies to see if they wish to be part of the IRG.
- There may be a role for the Office of Film and Literature Classification in auditing the list of banned sites.
- We confirmed that the system doesn’t work with HTTPS (encrypted web traffic) or with IPv6, the new version of the Internet Protocol.
- The contract for NetClean (the filtering product being used) specifies that the system can only be used to filter child pornography.
- They say that they wouldn’t add Wikileaks to the filter if a copy of the list turned up there.
I will be updating the FAQs accordingly.
I’ve taken a variety of laptops through NZ Customs many times in the past. I’ve also taken through portable hard drives and flash drives. It never occurred to me that Customs might want to search them or even that they had the power to.
It also didn’t occur to Amir Mohammed, a New Zealand man who was charged after Customs searched his laptop and found bestiality videos. Luckily for Amir the courts accepted his excuse and he was discharged without conviction.
Letter to Customs
I wanted to find out more about the powers that Customs had to intercept material in this way so I wrote and asked them about it. Here’s a merging of my questions and their answers (or grab the PDF):
1. What law gives Customs the ability to search the contents of laptop computers or other digital storage devices such as hard disks or flash drives?
Section 151 of the Customs and Excise Act 1996.
2. What type of material is Customs looking for when it performs these searches?
Any goods that may be in breach of New Zealand law.
3. Does Customs look for material that may be breaching copyright as well as objectionable material?
Yes.
4. How does Customs determine that any material found is objectionable?
Section 3 of the Films, Videos and Publications Classification Act 1993 outlines the meaning of objectionable. The New Zealand Customs Service will consider an item in terms of section 3 of this Act and may seize it if it is considered to be objectionable. If any clarification on this assessment is needed, Customs will refer goods suspected of breaching the Films, Videos and Publications Classification Act 1993 to the Office of Film and Literature Classification (OFLC). The OFLC is then responsible for ruling whether goods are objectionable or not and will advise the Customs Service accordingly.
5. How does Customs determine that any material found is in breach of copyright?
Assuming that this question refers to the importation of commercial digital data, the Customs Service will undertake a comparison of the imported item with the copyright work. The Service will then seek input from both the importer and rights owner, and then make a determination under the Copyright Act 1994.
6. How long can Customs retain possession of a computer or other digital storage device in order to search it?
The Customs Service can detain computers and digital storage devices for a reasonable length of time to allow for examination. This time can vary depending on the circumstances.
7. Does Customs ever copy any data from any device? If so, how long is this data retained for?
Yes, in order to examine a device the contents may be copied. The length of time this data is retained will vary depending on circumstances.
8. What happens if the stored data that Customs wishes to search is protected by a password or is encrypted so that it cannot be read?
I am unable to provide an answer to this question due to operational security.
Some brief comments
- Section 151 of the Customs Act refers to the general ability of Customs to search things and not to computers/data in particular. The Act doesn’t really mention computers or data explicitly.
- There is an interesting parallel to the Internet filtering being implemented by the DIA, although Customs uses the OFLC to help make decisions. I assume that it’s possible to appeal the decisions made by Customs.
- There is also an interesting parallel to the Internet copyright fight too, with Customs actually talking to both parties (importer and rights-holder) before taking possible infringement any further.
- I wonder if they refused to answer the question about encryption because it would be too embarrassing to say “We can’t do anything”. I was actually more interested to hear whether they have the legal right to compel people to provide decryption keys.
I have received another letter from the DIA (PDF) in response to further requests.
As well as a copy of the whitepaper that I’ve already written about, I asked the DIA to clarify “whether the filter will only be used for images of child sexual abuse or will it also be used for text files as described by Trevor Henry, Senior Communications Advisor on 17/7/2009”.
The response from Steve O’Brien, manager of the Censorship Compliance Unit, was as follows:
Unfortunately, Trevor Henry’s statement has been taken by some commentators as “proof” that the scope of the Digital Child Exploitation Filtering System will expand. As stated above, the purpose of the filtering system is to block access to known websites that contain images of child sexual abuse.
Well, I’m glad we cleared that up: the filter is only going to be used for pictures. But wait, what’s this? He hasn’t finished yet:
These websites sometimes also contain text files that exploit children for sexual purposes, and where this occurs those text files will also be blocked.
The concept of a text file exploiting a child seems odd to me.
Even ignoring the slightly ludicrous phrasing, part of the rhetoric around the implementation of the filter has been that they are trying to stop photos of actual abuse – “…will focus solely on websites offering clearly objectionable images of child sexual abuse.” There is no child being abused in a text file.
He also claims:
The examples given by Trevor Henry clearly demonstrate that written material can be just as abusive as pictures.
I don’t believe that he did clearly demonstrate that (read what he wrote). While what he describes is creepy, to my mind there is a clear distinction between writing about how to abuse a child and actually abusing one.
So, has this response cleared anything up? The answer has to be no. The DIA is still claiming that it’s only trying to ban images of child sexual abuse (something I might be inclined to support if they did it openly and if it had any chance of working) while at the same time admitting that they’ll ban other things that aren’t images as well.
The Department of Internal Affairs (DIA) have released to me their report (PDF) on the testing of the Internet Filtering system.
The first half of it is a description of the system and doesn’t really contain much new information (except that we now know it runs on FreeBSD and uses the Quagga BGP daemon).
The second half of it is more interesting as it has some results from the DIA’s testing. This was apparently split into three phases:
- Single ISP with 5,000 users (they already had their own filtering system so it was probably Watchdog).
- Two ISPs with 25,000 users.
- Four ISPs with 600,000 users (at a guess this was when Ihug and TelstraClear joined).
Before we go on, a brief reminder of how it works: The ISP diverts all requests that are on the same Internet address as one of the blocked sites. The filter then checks each diverted request and decides whether to block it or let it through. The filter never sees requests for websites that don’t share an Internet address with a blocked site.
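To make the two-stage process concrete, here’s a minimal sketch in Python. It’s my own illustration, not anything from the DIA’s report, and the addresses and URLs are invented:

```python
# A toy model of the two-stage filter: stage 1 (at the ISP) diverts by
# Internet address, stage 2 (the filter server) checks the full request.
BLOCKED_URLS = {"http://198.51.100.7/blocked-site/page.html"}
BLOCKED_IPS = {url.split("/")[2] for url in BLOCKED_URLS}  # "198.51.100.7"

def handle_request(url: str) -> str:
    host = url.split("/")[2]  # host part of the URL (simplified parsing)
    if host not in BLOCKED_IPS:
        return "not diverted"  # the filter never sees this request
    # Diverted to the filter, which inspects the full request:
    if url in BLOCKED_URLS:
        return "blocked"
    return "passed through"  # shares an address with a blocked site

print(handle_request("http://203.0.113.9/shop"))                     # not diverted
print(handle_request("http://198.51.100.7/blocked-site/page.html"))  # blocked
print(handle_request("http://198.51.100.7/another-site/index.html")) # passed through
```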
Interceptions
Now, back to the numbers. The phrasing in the whitepaper is a bit hard to interpret; the following is based on my best attempt at understanding it:
In phase 1, the system apparently had 3 million requests diverted to it each month and blocked 10,000 of those requests. This means that only a third of 1% of processed requests ended up being blocked.
In phase 2, there’s 8 million requests per month with 30,000 of them being blocked.
In phase 3, there’s 40 million requests per month with 100,000 of them being blocked.
In other words, there’s a very large number of requests being filtered through the DIA’s server compared to the number that are being blocked.
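For anyone who wants to check the arithmetic, here’s the calculation (using the figures as I’ve interpreted them above):

```python
# (diverted requests per month, blocked requests per month) for each phase
phases = {
    1: (3_000_000, 10_000),
    2: (8_000_000, 30_000),
    3: (40_000_000, 100_000),
}
for phase, (diverted, blocked) in phases.items():
    print(f"Phase {phase}: {100 * blocked / diverted:.3g}% of diverted requests blocked")
# prints 0.333%, 0.375% and 0.25% respectively
```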
Effectiveness
There’s no way to measure the effectiveness of the filter at stopping people from finding child pornography – we can’t tell how many people worked around it or downloaded material using peer-to-peer file sharing or other methods.
One interesting number, however, is the number of blocked requests per user.
In phase 1, there’s 2 blocked requests per user per month (10,000 blocked requests per month/5000 users).
In phase 2, there’s just over 1 blocked request per user per month (average 30,000 blocked requests per month, 25,000 users).
In phase 3, there’s 0.17 (average 100,000 blocked requests per month, 600,000 users).
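The per-user arithmetic, from the same figures:

```python
# (blocked requests per month, users) for each phase
phases = {1: (10_000, 5_000), 2: (30_000, 25_000), 3: (100_000, 600_000)}
for phase, (blocked, users) in phases.items():
    print(f"Phase {phase}: {blocked / users:.2f} blocked requests per user per month")
# prints 2.00, 1.20 and 0.17 respectively, matching the numbers above
```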
What’s odd is the way the number of blocked requests per user goes down phase by phase. I have no idea what this indicates.
Robustness
According to the report, the system was operating at 80% capacity in the third phase. Apparently this was a bit much for it as: “the system did experience some stability issues processing this amount of requests and required maintenance on two occasions to replace hardware.”
There is no further detail about whether the “80% capacity” referred to the performance of the filtering system or the Internet connection they were using.
I sent in a submission to the Department of Internal Affairs about the Internet filtering scheme. Originally I was intending to run the normal arguments (which I’m sure anyone reading this is already familiar with) but I started to think about the constitutionality of the scheme – and my submission got away on me a bit.
Here’s some of the questions I’m thinking about:
- Where does the DIA derive the authority to create and implement the Internet filtering scheme from? I can’t find any basis for it in the 1993 Films, Videos and Publications Classification Act that they mention in the draft Code of Practice.
- Can Government departments just make new powers up? What avenues are available to stop them?
- Even if there was some basis in the 1993 law, it’s very clear about censorship decisions having to be published, and also defines an accountable appeals process. The DIA seems to have completely ignored this when designing their scheme. Again, how can they do this and how can we stop them?
- When a new law is proposed that has Bill of Rights implications (“Everyone has the right to be secure against unreasonable search or seizure, whether of the person, property, or correspondence or otherwise” seems relevant), the Attorney-General has to submit a report to Parliament on those implications. Is there anything similar when Government departments create new powers for themselves? How does the Bill of Rights work if government departments can just ignore it?
If anyone has any answers to these questions or can point me in the direction of a constitutional lawyer who wouldn’t mind giving some free advice for a good cause, I’d be very grateful.